I hope I got the CC list right. Apologies to anyone I didn't include
and anyone I shouldn't have included.
The basic idea is to borrow an idea from VMS that seems to be quite
useful: version numbers for files.
The idea is that whenever you modify a file the system saves it to a
new copy, leaving the old file intact. This could be a great advantage
from many viewpoints:
1) it would be much easier to do package management as the old
version would be automatically saved for a package
management system to deal with.
2) backups would also be easier as all versions of a file
are automatically saved so it could be potentially very
useful for a company or the like.
There are probably many others but these were the two that I liked best.
Revision numbers could be specified as follows:
/path/to/file:revision_number
I think that this can be done without breaking userspace if the default
was to open the highest revision file if no revision number is
specified. The userspace tools would need to be updated to take full
advantage of the new system but if the delimiter between the path and
revision number were chosen sensibly then the changes to most of
userspace would be minimal to non-existent.
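As a rough sketch of the parsing this would involve (the ':' delimiter, the function name, and the -1 = "latest" convention are all assumptions for illustration, not an existing kernel interface):

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical sketch: split "path:revision" at the last ':'.
 * Returns the revision number, or -1 meaning "open the highest
 * revision" when no numeric suffix is present.  The bare path is
 * copied into 'out'. */
static int parse_revision(const char *name, char *out, size_t outlen)
{
    const char *colon = strrchr(name, ':');

    if (colon != NULL && colon[1] != '\0') {
        char *end;
        long rev = strtol(colon + 1, &end, 10);

        if (*end == '\0' && rev >= 0) {
            size_t len = (size_t)(colon - name);

            if (len >= outlen)
                len = outlen - 1;
            memcpy(out, name, len);
            out[len] = '\0';
            return (int)rev;
        }
    }
    /* No valid suffix: the whole string is the path, use the latest. */
    snprintf(out, outlen, "%s", name);
    return -1;
}
```

A name whose suffix is not a plain number falls through to the "latest revision" case, so ordinary filenames keep working unchanged.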
Personally, I think that the bulk of the implementation could be in the
core fs code and the modifications to individual filesystems would be
minimal. The main implementation idea I have (however, I am no kernel
expert =) is to add an extra field to struct file and struct inode
called int revision (as version is already taken) that would hold the
number of the file revision being accessed.
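As a toy illustration of where such a field might sit (these are miniature stand-ins, not the real struct file and struct inode, and the field name and -1 = "latest" convention are assumptions):

```c
#include <assert.h>

/* Miniature stand-ins for the kernel structures being discussed; the
 * 'revision' field and its -1 = "latest" convention are assumptions
 * for illustration only. */
struct demo_inode {
    unsigned long i_ino;
    int revision;            /* on-disk revision this inode refers to */
};

struct demo_file {
    struct demo_inode *f_inode;
    int revision;            /* revision asked for at open(); -1 = latest */
};

/* Resolve the revision a given open file actually refers to. */
static int file_revision(const struct demo_file *f)
{
    return f->revision >= 0 ? f->revision : f->f_inode->revision;
}
```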
Another problem could be the increased usage of disk space. However, if
only deltas from the first version were stored then this could cut down
on space; or, if that made opening a file too slow, the deltas could be
based off every tenth revision (ie 0,10,20,30... where 0,10,20... are full
copies of the file).
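The arithmetic for that scheme is simple; a sketch (the interval of 10 comes from the example above, and would presumably be tunable):

```c
#include <assert.h>

#define FULL_COPY_INTERVAL 10   /* full copy kept every tenth revision */

/* Nearest full copy at or below the requested revision. */
static int base_revision(int rev)
{
    return (rev / FULL_COPY_INTERVAL) * FULL_COPY_INTERVAL;
}

/* How many deltas must be applied on top of that full copy. */
static int deltas_to_apply(int rev)
{
    return rev - base_revision(rev);
}
```

So opening revision 27 would mean reading the full copy at 20 and applying 7 deltas, bounding the cost of any open.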
There would need to be a tool of some description to remove old
revisions, but this should not be a major undertaking as it may be
something as simple as a new system call. This would have to be careful
to update any deltas that were affected by the removal of previous
revisions, but that could be taken care of in kernel space.
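To make the "update any deltas" step concrete, here is a toy model (file contents reduced to a single int, so a "delta" is just a difference; all names are hypothetical) showing that removing a middle revision means folding its delta into the next one:

```c
#include <assert.h>

#define MAXREV 16

/* Toy delta chain: revision 0 is a full copy (a single int here), and
 * every later revision stores only its delta from the previous one. */
struct chain {
    int base;            /* full copy of revision 0 */
    int delta[MAXREV];   /* delta[i] turns revision i-1 into revision i */
    int nrev;            /* number of revisions, including revision 0 */
};

/* Reconstruct a revision: base plus the sum of deltas up to it. */
static int reconstruct(const struct chain *c, int rev)
{
    int i, v = c->base;

    for (i = 1; i <= rev; i++)
        v += c->delta[i];
    return v;
}

/* Remove a middle revision k (0 < k < nrev-1): fold its delta into
 * the next revision's delta so every later revision reconstructs
 * unchanged, then close the gap. */
static void remove_revision(struct chain *c, int k)
{
    int i;

    c->delta[k + 1] += c->delta[k];
    for (i = k; i < c->nrev - 1; i++)
        c->delta[i] = c->delta[i + 1];
    c->nrev--;
}
```

With real file deltas the fold is a patch merge rather than an addition, but the invariant is the same: every surviving revision must reconstruct byte-for-byte as before.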
Thanks to anyone who stuck with me this far =). I don't know how widely
useful this may be but that's the reason I posted before trying to code
anything. I would very much value any contributions, even a reasoned NAK,
as I'm still learning how kernel development works (and I would love any
implementation directions).
Jack
Jack Stone wrote:
> I hope I got the CC list right. Apologies to anyone I didn't include
> and anyone I shouldn't have included.
>
> The basic idea is to include an idea from VMS that seems to be quite
> useful: version numbers for files.
>
> The idea is that whenever you modify a file the system saves it to a
> new copy leaving the old file intact. This could be a great advantage
> from many view points:
> 1) it would be much easier to do package management as the old
> version would be automatically saved for a package
> management system to deal with.
>
> 2) backups would also be easier as all versions of a file
> are automatically saved so it could be potentially very
> useful for a company or the like.
>
This is one of those things that seems like a good idea, but frequently
ends up short. Part of the problem is that "whenever you modify a file"
is ill-defined, or rather, if you were to take the literal meaning of it
you'd end up with an unmanageable number of revisions.
Furthermore, it turns out that often relationships between files are
more important.
Thus, in the end it turns out that this stuff is better handled by
explicit version-control systems (which require explicit operations to
manage revisions) and atomic snapshots (for backup.)
-hpa
On Fri, 15 Jun 2007, H. Peter Anvin wrote:
> Jack Stone wrote:
>> I hope I got the CC list right. Apologies to anyone I didn't include
>> and anyone I shouldn't have included.
>>
>> The basic idea is to include an idea from VMS that seems to be quite
>> useful: version numbers for files.
>>
>> The idea is that whenever you modify a file the system saves it to a
>> new copy leaving the old file intact. This could be a great advantage
>> from many view points:
>> 1) it would be much easier to do package management as the old
>> version would be automatically saved for a package
>> management system to deal with.
>>
>> 2) backups would also be easier as all versions of a file
>> are automatically saved so it could be potentially very
>> useful for a company or the like.
>>
>
> This is one of those things that seems like a good idea, but frequently
> ends up short. Part of the problem is that "whenever you modify a file"
> is ill-defined, or rather, if you were to take the literal meaning of it
> you'd end up with an unmanageable number of revisions.
And no drive space.
> Furthermore, it turns out that often relationships between files are
> more important.
And there are files that are not important at all.
Would you save every temp file? To be meaningful you would need to know
the process they were tied to in many cases.
> Thus, in the end it turns out that this stuff is better handled by
> explicit version-control systems (which require explicit operations to
> manage revisions) and atomic snapshots (for backup.)
ZFS is the cool new thing in that space. Too bad the license makes it
hard to incorporate it into the kernel. (I am one of those people that
believe that Linux should support EVERY file system, no matter how old or
obscure.)
--
"ANSI C says access to the padding fields of a struct is undefined.
ANSI C also says that struct assignment is a memcpy. Therefore struct
assignment in ANSI C is a violation of ANSI C..."
- Alan Cox
Jack Stone wrote:
> I hope I got the CC list right. Apologies to anyone I didn't include
> and anyone I shouldn't have included.
>
> The basic idea is to include an idea from VMS that seems to be quite
> useful: version numbers for files.
>
> The idea is that whenever you modify a file the system saves it to a
> new copy leaving the old file intact. This could be a great advantage
> from many view points:
> 1) it would be much easier to do package management as the old
> version would be automatically saved for a package
> management system to deal with.
>
> 2) backups would also be easier as all versions of a file
> are automatically saved so it could be potentially very
> useful for a company or the like.
>
> There are probably many others but these were the two that I liked best.
>
> Revision numbers could be specified as follows:
> /path/to/file:revision_number
>
>
> I think that this can be done without breaking userspace if the default
> was to open the highest revision file if no revision number is
> specified. The userspace tools would need to be updated to take full
> advantage of the new system but if the delimiter between the path and
> revision number were chosen sensibly then the changes to most of
> userspace would be minimal to non-existent.
>
> Personally, I think that the bulk of the implementation could be in the
> core fs code and the modifications to individual filesystems would be
> minimal. The main implementation ideas I have (however, I am no kernel
> expert =) are adding an extra field to struct file and struct inode
> called int revision (as version is already taken) that would hold the
> number of the file revision being accessed.
>
> Another problem could be the increased usage of disk space. However if
> only deltas from the first version were stored then this could cut down
> on space, or if this were too slow to open a file then the deltas could
> be off every tenth revision (ie 0,10,20,30... where 0,10,20... are full
> copies of the file).
>
> There would need to be a tool of some description to remove old
> revisions but this should not be a major undertaking as it may be
> something as simple as a new system call. This would have to be careful
> to update any deltas that were affected by the removal of previous
> revisions but that could be taken care of in kernel space.
>
> Thanks to anyone who stuck with me this far =). I don't know how widely
> useful this may be but that's the reason I posted before trying to code
> anything. I would very much value any contributions even a reasoned NAK
> as I'm still learning how kernel development works (and I would love any
> implementation directions)
>
> Jack
> -
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to [email protected]
> More majordomo info at http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at http://www.tux.org/lkml/
The underlying internal implementation of something like this wouldn't be all
that hard on many filesystems, but it's the interface that's the problem. The
':' character is a perfectly legal filename character, so doing it that way
would break things. I think NetApp more or less got the interface right by
putting a .snapshot directory in each directory, with time-versioned
subdirectories each containing snapshots of that directory's contents at those
points in time. It keeps the backups under the same hierarchy as the original
files, to avoid permissions headaches, it's accessible over NFS without
modifying the client at all, and it's hidden just enough to make it hard for
users to do something stupid.
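The mapping Chris describes can be pictured with a tiny helper (label names like "hourly.0" follow NetApp's convention; the function itself is purely illustrative):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Illustrative only: where a snapshot of dir/file taken at some
 * labelled point in time appears under the NetApp-style layout. */
static void snapshot_path(char *out, size_t len, const char *dir,
                          const char *label, const char *file)
{
    snprintf(out, len, "%s/.snapshot/%s/%s", dir, label, file);
}
```

Because the snapshot lives under the same directory tree, the original file's permissions naturally gate access to its old versions.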
If you want to do something like this (and it's generally not a bad idea), make
sure you do it in a way that's not going to change the behavior seen by existing
applications, and that is accessible to unmodified remote clients. Hidden
.snapshot directories are one way, a parallel /backup filesystem could be
another, whatever. If you break existing apps, I won't touch it with a ten foot
pole.
-- Chris
Jack Stone wrote:
> I hope I got the CC list right. Apologies to anyone I didn't include
> and anyone I shouldn't have included.
>
> The basic idea is to include an idea from VMS that seems to be quite
> useful: version numbers for files.
<snip>
Have you looked into ext3cow? It allows you to take snapshots of the entire ext3
fs at a single point in time, and roll back / extract snapshots at any time later. This
may be sufficient for you, and the implementation seems to be rather stable already.
Cheers,
Auke
http://www.ext3cow.com/
alan wrote:
>
> ZFS is the cool new thing in that space. Too bad the license makes it
> hard to incorporate it into the kernel. (I am one of those people that
> believe that Linux should support EVERY file system, no matter how old
> or obscure.)
>
I have details on the Luxor UFD-DOS filesystem, if you'd care to
implement it.
-hpa
On Fri, 15 Jun 2007, Kok, Auke wrote:
> Jack Stone wrote:
>> I hope I got the CC list right. Apologies to anyone I didn't include
>> and anyone I shouldn't have included.
>>
>> The basic idea is to include an idea from VMS that seems to be quite
>> useful: version numbers for files.
>
> <snip>
>
> have you looked into ext3cow? it allows you to take snapshots of the entire
> ext3 fs at a single point, and rollback / extract snapshots at any time
> later. This may be sufficient for you and the implementation seems to be
> rather stable already.
As long as there is only one person using the file system. Rolling back
the entire filesystem may work well for you, but screw up something else
someone else is doing.
And what kind of rights do you have to assign to the user to do that level
of snapshot and rollback? You have to assume that there is more than one
user and that they have less than root privileges.
--
"ANSI C says access to the padding fields of a struct is undefined.
ANSI C also says that struct assignment is a memcpy. Therefore struct
assignment in ANSI C is a violation of ANSI C..."
- Alan Cox
On Fri, 15 Jun 2007, H. Peter Anvin wrote:
> alan wrote:
>>
>> ZFS is the cool new thing in that space. Too bad the license makes it
>> hard to incorporate it into the kernel. (I am one of those people that
>> believe that Linux should support EVERY file system, no matter how old
>> or obscure.)
>>
>
> I have details on the Luxor UFD-DOS filesystem, if you'd care to
> implement it.
Do you have example discs that can be mounted to test it? If you do, I
will consider doing it.
I have a couple of older DOS filesystems that got dropped years ago
and that I actually need in order to mount disks; I may rewrite them for
2.6.x. Now all I need is the time.
And speaking of obscure information...
I have a bunch of PCMCIA spec documents from the PCMCIA standards
association from the late 90s. Would anyone involved in maintaining the
PCMCIA code be interested in them? (Especially if they are in Portland.)
It has been a while since I have even needed to look at it and I hate for
it to go to waste if it can be of any use. (Bit late now, I know...)
--
"ANSI C says access to the padding fields of a struct is undefined.
ANSI C also says that struct assignment is a memcpy. Therefore struct
assignment in ANSI C is a violation of ANSI C..."
- Alan Cox
alan wrote:
> On Fri, 15 Jun 2007, H. Peter Anvin wrote:
>> This is one of those things that seems like a good idea, but frequently
>> ends up short. Part of the problem is that "whenever you modify a file"
>> is ill-defined, or rather, if you were to take the literal meaning of it
>> you'd end up with an unmanageable number of revisions.
>
> And no drive space.
>
One of the key points of the implementation would have to be the ability
to delete old revisions without affecting the subsequent revisions. This
would allow people to keep the number of revisions down.
Also, if each revision is in effect a patch on the last revision, it could
cut down the disk space required to store them; or, if that takes too long
when reading a file, then have every tenth version (0,10,20,30... not the
tenth versions, I know, but easier to read) as a full version of the file
off which all future versions are based.
>> Furthermore, it turns out that often relationships between files are
>> more important.
>
> And there are files that are not important at all.
>
> Would you save every temp file? To be meaningful you would need to know
> the process they were tied to in many cases.
I hadn't considered that, but I did think that you could remove the old
revisions of a file after some configurable time. This would allow
recovery in case of accidental deletion but should keep the disk space
usage down.
>> Thus, in the end it turns out that this stuff is better handled by
>> explicit version-control systems (which require explicit operations to
>> manage revisions) and atomic snapshots (for backup.)
Possibly, but if I use it to manage my entire system (ie as a package
manager) then the system would likely explode if I tried to update or
remove a key package whilst the system was running. With the kernel
involved the process could be much smoother.
> ZFS is the cool new thing in that space. Too bad the license makes it
> hard to incorporate it into the kernel. (I am one of those people that
> believe that Linux should support EVERY file system, no matter how old
> or obscure.)
Chris Snook wrote:
> The underlying internal implementation of something like this wouldn't
> be all that hard on many filesystems, but it's the interface that's the
> problem. The ':' character is a perfectly legal filename character, so
> doing it that way would break things.
But to work without breaking userspace it would need to be a character
that would pass through any path-checking routines, i.e. be a legal path
character.
> I think NetApp more or less got the interface right by putting a
> .snapshot directory in each directory, with time-versioned
> subdirectories each containing snapshots of that directory's contents
> at those points in time. It keeps the backups under the same
> hierarchy as the original files, to avoid permissions headaches,
> it's accessible over NFS without modifying the client at all,
> and it's hidden just enough to make it hard for users to do something
> stupid.
My personal implementation idea was to store lots of files of the form
file:revision_number (I'll keep using that until somebody suggests
something better) on the file system itself, with a hard link from the
latest version to file (this is probably not a major improvement and
having the hard link could make it hard to implement deltas). This could
mean no changes to the file system itself (except maybe a flag to say
it's versioned). The kernel would then do the translation to find the
correct file, and would only show the latest version to user apps not
requesting a specific version.
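A sketch of that translation step (the file:N naming and the linear scan are illustrative assumptions; a real implementation would hook the directory lookup path rather than scan an array):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Given the names stored on disk ("file:0", "file:1", ...), find the
 * highest revision of 'file'.  Returns -1 if no revision exists.
 * Purely illustrative of the name translation Jack describes. */
static int highest_revision(const char *names[], int n, const char *file)
{
    size_t len = strlen(file);
    int i, best = -1;

    for (i = 0; i < n; i++) {
        if (strncmp(names[i], file, len) == 0 && names[i][len] == ':') {
            int rev = atoi(names[i] + len + 1);

            if (rev > best)
                best = rev;
        }
    }
    return best;
}
```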
> If you want to do something like this (and it's generally not a bad
> idea), make sure you do it in a way that's not going to change the
> behavior seen by existing applications, and that is accessible to
> unmodified remote clients. Hidden .snapshot directories are one way, a
> parallel /backup filesystem could be another, whatever. If you break
> existing apps, I won't touch it with a ten foot pole.
The whole interface would be designed to give the existing behavior by
default, for two reasons: users are used to opening a file and getting
the latest version, and it avoids breaking userspace. I personally
wouldn't touch this either if it broke userspace. The only userspace
change would be the addition of tools to manage the revisions etc.
Userspace could later upgrade to take advantage of the new functionality,
but I cannot see the worth in breaking it.
For an example of a working implementation see:
http://www.o3one.org/filesystem.html
Jack
This already exists -- it's just not open sourced, and you could spend
years trying to create it. Trust me, once you start dealing with the
distributed issues with this, it gets very complex. I am not meaning
to discourage you, but there are patents already filed on this on
Linux. So you need to consider these as well, and there are several
folks who are already doing this or have done it. If it goes into
Microsoft-endorsed cross-licensed Linuxes it may be OK (Veritas sold
this capability to Microsoft already; about 12 patents there to worry
over). There's also another patent filed as well. It's a noble effort
to do a free version, but be aware there are some big guns with patents
out there already, not to mention doing this is complex beyond belief.
Anyway, good luck. ~~~~
Jeffrey V. Merkey wrote:
>
> This already exists -- it's just not open sourced, and you could spend
> years trying to create it. Trust me, once you start dealing with the
> distributed issues with this, it gets very complex. I am not meaning
> to discourage you, but there are patents already filed on this on
> Linux. So you need to consider these as well, and there are several
> folks who are already doing this or have done it. If it goes into
> Microsoft-endorsed cross-licensed Linuxes it may be OK (Veritas sold
> this capability to Microsoft already; about 12 patents there to worry
> over). There's also another patent filed as well. It's a noble
> effort to do a free version, but be aware there are some big guns with
> patents out there already, not to mention doing this is complex beyond
> belief.
> Anyway, good luck. ~~~~
I reviewed your sample implementation, and it appears to infringe 3
patents already. You should do some research on this. ~~~~
Hi,
On Fri, Jun 15, 2007 at 04:01:14PM -0700, alan wrote:
> On Fri, 15 Jun 2007, Kok, Auke wrote:
> ><snip>
> >
> >have you looked into ext3cow? it allows you to take snapshots of the
> >entire ext3 fs at a single point, and rollback / extract snapshots at any
> >time later. This may be sufficient for you and the implementation seems to
> >be rather stable already.
>
> As long as there is only one person using the file system. Rolling back
> the entire filesystem may work well for you, but screw up something else
> someone else is doing.
>
> And what kind of rights do you have to assign to the user to do that level
> of snapshot and rollback? You have to assume that there are more than one
> user and that they have less than root privileges.
Perhaps BTRFS might be of interest where you can have a subvolume for every
user: http://lkml.org/lkml/2007/6/12/242
Hannes
"Jeffrey V. Merkey" <[email protected]> writes:
> This already exists -- it's just not open sourced, and you could spend
> years trying to create it. Trust me, once you start dealing with the
> distributed issues with this, it gets very complex. I am not meaning
> to discourage you, but there are patents already filed on this on
> Linux. So you need to consider these as well, and there are several
> folks who are already doing this or have done it. If it goes into
> Microsoft-endorsed cross-licensed Linuxes it may be OK (Veritas sold
> this capability to Microsoft already; about 12 patents there to worry
> over). There's also another patent filed as well. It's a noble
> effort to do a free version, but be aware there are some big guns with
> patents out there already, not to mention doing this is complex beyond
> belief.
Would such patents still be valid? He does not seem to be describing
anything that the ICL VME/B operating system did not do in the 1970s, so
any applicable patents should have either expired by now or been
invalidated by prior art.
> I reviewed your sample implementation, and it appears to infringe 3
> patents already. You should do some research on this. ~~~~
Are you able to tell us which areas of the code infringe existing patents?
Cheers,
Mark
--
Dave: Just a question. What use is a unicyle with no seat? And no pedals!
Mark: To answer a question with a question: What use is a skateboard?
Dave: Skateboards have wheels.
Mark: My wheel has a wheel!
On Fri, 15 June 2007 15:51:07 -0700, alan wrote:
>
> >Thus, in the end it turns out that this stuff is better handled by
> >explicit version-control systems (which require explicit operations to
> >manage revisions) and atomic snapshots (for backup.)
>
> ZFS is the cool new thing in that space. Too bad the license makes it
> hard to incorporate it into the kernel.
It may be the coolest, but there are others as well. Btrfs looks good,
nilfs finally has a cleaner and may be worth a try, logfs will get
snapshots sooner or later. Heck, even my crusty old cowlinks can be
viewed as snapshots.
If one has spare cycles to waste, working on one of those makes more
sense than implementing file versioning.
Jörn
--
"Security vulnerabilities are here to stay."
-- Scott Culp, Manager of the Microsoft Security Response Center, 2001
On Sat, Jun 16, 2007 at 04:12:14AM -0600, Jeffrey V. Merkey wrote:
> Jeffrey V. Merkey wrote:
> >over). There's also another patent filed as well. It's a noble
> >effort to do a free version, but be aware there's some big guns with
> >patents out there already, not to mention doing this is complex beyond
> >belief.
>
> I reviewed your sample implementation, and it appears to infringe 3
> patents already. You should do some research on this. ~~~~
First of all, you are responding to someone in the UK, I thought they
didn't even have software patents over there. Second, I didn't see any
implementation, just a high-level description. Finally, advising anyone
(who is not an actual patent lawyer who could correctly interpret the
language and scope of a patent) to go search out patents seems like pretty
bad advice. That can only result in not even attempting to research some
potentially new and innovative approach.
Researching prior published work in the area is considerably more
helpful. Especially when something is complex beyond belief it has
probably attracted various researchers over time and there are most
likely various different solutions that have been explored previously.
Such existing work can form a good basis for further work.
Finally, even if there are patents, they could be too limited in scope,
overly broad, or invalidated due to prior art. It may also be
possible that a patent holder has no problem granting a royalty-free
license for a GPL-licensed implementation.
Jan
Mark Williamson wrote:
>>I reviewed your sample implementation, and it appears to infringe 3
>>patents already. You should do some research on this. ~~~~
>
>Are you able to tell us which areas of the code infringe existing patents?
>
>
Yes.
Jeff
>Cheers,
>Mark
Jan Harkes wrote:
>On Sat, Jun 16, 2007 at 04:12:14AM -0600, Jeffrey V. Merkey wrote:
>
>
>>Jeffrey V. Merkey wrote:
>>
>>
>>>over). There's also another patent filed as well. It's a noble
>>>effort to do a free version, but be aware there's some big guns with
>>>patents out there already, not to mention doing this is complex beyond
>>>belief.
>>>
>>>
>>I reviewed your sample implementation, and it appears to infringe 3
>>patents already. You should do some research on this. ~~~~
>>
>>
>
>First of all, you are responding to someone in the UK, I thought they
>didn't even have software patents over there. Second, I didn't see any
>implementation, just a high level description. Finally advising anyone
>(who is not an actual patent lawyer that could correctly interpret the
>language and scope of a patent) to go search out patents seems pretty
>bad advice. That can only result in not even attempting to research some
>potentially new and innovative approach.
>
>Researching prior published work in the area is considerably more
>helpful. Especially when something is complex beyond belief it has
>probably attracted various researchers over time and there are most
>likely various different solutions that have been explored previously.
>Such existing work can form a good basis for further work.
>
>Finally, even if there are patents they could be too limited in scope,
>overly broad, can be invalidated due to prior art. It may also be
>possible that a patent holder has no problem granting a royalty free
>license for a GPL licensed implementation.
>
>Jan
When you get into the recycling issues with storage, the patents come
into play. Also, using the file name to reference revisions is already
the subject of a patent previously filed (I no longer own the patents; I
sold them to Canopy). There is a third one about to be issued.
The patents are:
6,862,609
6,795,895
and this one about to be issued:
http://www.wipo.int/pctdb/en/fetch.jsp?LANG=ENG&DBSELECT=PCT&SERVER_TYPE=19&SORT=1211506-KEY&TYPE_FIELD=256&IDB=0&IDOC=1205953&C=10&ELEMENT_SET=IA,WO,TTL-EN&RESULT=1&TOTAL=3&START=1&DISP=25&FORM=SEP-0/HITNUM,B-ENG,DP,MC,PA,ABSUM-ENG&SEARCH_IA=US2005045566&QUERY=%28IN%2fmerkey%29+
The last one was filed with WIPO and has international protection, UK
included.
Jeff
Jeffrey V. Merkey wrote:
> When you get into the recycling issues with storage, the patents come
> into play. Also, using the file name to reference revisions is already
> the subject of a patent previously filed (I no longer own the patent, I
> sold them to Canopy). There is a third one about to be issued.
>
> The patents are:
> 6,862,609
> 6,795,895
>
> and this one about to be issued:
>
> http://www.wipo.int/pctdb/en/fetch.jsp?LANG=ENG&DBSELECT=PCT&SERVER_TYPE=19&SORT=1211506-KEY&TYPE_FIELD=256&IDB=0&IDOC=1205953&C=10&ELEMENT_SET=IA,WO,TTL-EN&RESULT=1&TOTAL=3&START=1&DISP=25&FORM=SEP-0/HITNUM,B-ENG,DP,MC,PA,ABSUM-ENG&SEARCH_IA=US2005045566&QUERY=%28IN%2fmerkey%29+
>
>
> The last one was filed with WIPO and has international protection, UK
> included.
I have no idea about patents, so if anyone could point me in the right
direction I would be most obliged.
Jack
> http://www.wipo.int/pctdb/en/fetch.jsp?LANG=ENG&DBSELECT=PCT&SERVER_TYPE=19&SORT=1211506-KEY&TYPE_FIELD=256&IDB=0&IDOC=1205953&C=10&ELEMENT_SET=IA,WO,TTL-EN&RESULT=1&TOTAL=3&START=1&DISP=25&FORM=SEP-0/HITNUM,B-ENG,DP,MC,PA,ABSUM-ENG&SEARCH_IA=US2005045566&QUERY=%28IN%2fmerkey%29+
>
> The last one was filed with WIPO and has international protection, UK
> included.
Nope. EU and UK law does not recognize software as patentable. See the
case law.
Alan Cox wrote:
>>http://www.wipo.int/pctdb/en/fetch.jsp?LANG=ENG&DBSELECT=PCT&SERVER_TYPE=19&SORT=1211506-KEY&TYPE_FIELD=256&IDB=0&IDOC=1205953&C=10&ELEMENT_SET=IA,WO,TTL-EN&RESULT=1&TOTAL=3&START=1&DISP=25&FORM=SEP-0/HITNUM,B-ENG,DP,MC,PA,ABSUM-ENG&SEARCH_IA=US2005045566&QUERY=%28IN%2fmerkey%29+
>>
>>The last one was filed with WIPO and has international protection, UK
>>included.
>>
>>
>
>Nope. EU and UK law does not recognize software as patentable. See the
>case law.
>
>
Thanks for clarifying, Alan. I was uncertain about current patent laws in
the UK and abroad. I know this area has undergone considerable debate
and changes recently, and I have not been keeping up with all of it.
Jeff
On Sat, Jun 16, 2007 at 02:03:49PM -0600, Jeffrey V. Merkey wrote:
> Jan Harkes wrote:
> >implementation, just a high level description. Finally advising anyone
> >(who is not an actual patent lawyer that could correctly interpret the
> >language and scope of a patent) to go search out patents seems pretty
> >bad advice. That can only result in not even attempting to research some
> >potentially new and innovative approach.
> >
> >Researching prior published work in the area is considerably more
> >helpful. Especially when something is complex beyond belief it has
> >probably attracted various researchers over time and there are most
> >likely various different solutions that have been explored previously.
> >Such existing work can form a good basis for further work.
>
> When you get into the recycling issues with storage, the patents come
> into play. Also, using the file name to reference revisions is already
> the subject of a patent previously filed (I no longer own the patent, I
> sold them to Canopy). There is a third one about to be issued.
Congratulations on obtaining those patents, I hope they will be used
wisely. I am however not a patent lawyer and as such in no position to
evaluate their claims.
As a more useful response, the original poster may want to look at some
of the prior work in this area, I just picked a couple,
(Cedar File System from Xerox PARC)
A Caching File System for a Programmer's Workstation (1985)
Michael D. Schroeder, David K. Gifford, Roger M. Needham
(Vax/VMS System Software Handbook)
(TOPS-20 User's Manual)
(Plan 9 (file system))
Plan 9 from Bell Labs (1990)
Rob Pike, Dave Presotto, Sean Dorward, Bob Flandrena, Ken Thompson,
Howard Trickey, Phil Winterbottom
(Elephant File System)
Elephant: The File System that Never Forgets (1999)
Douglas J. Santry, Michael J. Feeley, Norman C. Hutchinson
Workshop on Hot Topics in Operating Systems
Deciding when to forget in the Elephant file system (1999)
Douglas S. Santry, Michael J. Feeley, Norman C. Hutchinson,
Alistair C. Veitch, Ross W. Carton, Jacob Otir
(Ext3Cow)
Ext3cow: The Design, Implementation, and Analysis of Metadata for a
Time-Shifting File System (2003)
Zachary N. J. Peterson, Randal C. Burns
Sites like portal.acm.org and citeseer.ist.psu.edu are good places to
find copies of these papers. They also provide links to other work that
either is cited by, or cites these papers which is a convenient way to
find other papers in this area.
Researching, designing and implementing such systems is a lot of fun,
admittedly often more fun than long term debugging and/or maintaining,
but that is life. Don't get too upset if the end result cannot be
included in the main kernel. Just start over from scratch, you may just
end up with an even better design the second time around.
Jan
Jan Harkes wrote:
> Sites like portal.acm.org and citeseer.ist.psu.edu are good places to
> find copies of these papers. They also provide links to other work that
> either is cited by, or cites these papers which is a convenient way to
> find other papers in this area.
>
> Researching, designing and implementing such systems is a lot of fun,
> admittedly often more fun than long term debugging and/or maintaining,
> but that is life. Don't get too upset if the end result cannot be
> included in the main kernel. Just start over from scratch, you may just
> end up with an even better design the second time around.
Thank you very much for the info and the advice.
I would also like to thank everyone for the help and encouragement that
they have given to me.
Jack
DEC had versioning files systems 30 years ago. Any
patents on their style must certainly have expired
long ago.
Look at RSX-11 and other seventies era operating
systems.
This is ancient stuff.
> (Vax/VMS System Software Handbook)
> (TOPS-20 User's Manual)
Also Files-11
Basic versioning goes back to at least ITS
Not sure how old doing file versioning and hiding it away with a tool to
go rescue the stuff you blew away by mistake is, but Novell Netware 3
certainly did a good job on that one
Alan Cox wrote:
>> (Vax/VMS System Software Handbook)
>> (TOPS-20 User's Manual)
>>
>>
>
>Also Files-11
>
>Basic versioning goes back to at least ITS
>
>Not sure how old doing file versioning and hiding it away with a tool to
>go rescue the stuff you blew away by mistake is, but Novell Netware 3
>certainly did a good job on that one
>
>
The trick in the NetWare 3 model was to segregate the directory entries
onto special reserved 4K directory blocks (128-byte dir records). When
it came time to purge storage after the file system filled, an entire
4K block and all its chains were deleted during block allocation for
new files. The dir blocks were ordered by date: the oldest ones got
purged first. The model worked very well until compression was added
to the filesystem; then it started getting complex.
I would be willing to help instrument the NetWare 3 model in this
proposal on ext3, since this is a basic versioning model and
would provide coverage for 99% of the folks needing this capability. I
am available for questions.
Jeff
Jeffrey V. Merkey wrote:
> The trick in the NetWare 3 model was to segregate the directory
> entries onto special reserved
> 4K directory blocks (128 byte dir records). When it came time to purge
> storage after the file system filled, an entire 4K block and all
> chains was deleted during block allocation for new files. The dir
> blocks were ordered by date -- oldest ones got purged
> first. The model worked very well until compression was added to the
> filesystem, then it started getting complex.
>
> I would be willing to help instrument the NetWare 3 model in this
> proposal on ext3, since this is a basic versioning model and
> would provide coverage for 99% of the folks needing this capability. I
> am available for questions.
>
> Jeff
>
I resigned as Chief Scientist of Solera Networks May 8 this year --
mostly because I was not allowed to have any freedom, including working on
free Linux projects. This went on for almost 4 years (since 2003). Now
that I am independent again, I can work on stuff like this again. I
started a new company with John Noorda (Ray Noorda's oldest son) who
runs Canopy now. That's my current status. I am an owner and entrepreneur
again. So I have a lot of free time and will from now on.
Jeff
On Sat, Jun 16, 2007 at 11:17:58PM +0100, Alan Cox wrote:
> > (Vax/VMS System Software Handbook)
> > (TOPS-20 User's Manual)
>
> Also Files-11
And don't forget the really ground breaking work (for the
time) done by the Xanadu folk.
On Jun 16, 2007 16:53 +0200, Jörn Engel wrote:
> On Fri, 15 June 2007 15:51:07 -0700, alan wrote:
> > >Thus, in the end it turns out that this stuff is better handled by
> > >explicit version-control systems (which require explicit operations to
> > >manage revisions) and atomic snapshots (for backup.)
> >
> > ZFS is the cool new thing in that space. Too bad the license makes it
> > hard to incorporate it into the kernel.
>
> It may be the coolest, but there are others as well. Btrfs looks good,
> nilfs finally has a cleaner and may be worth a try, logfs will get
> snapshots sooner or later. Heck, even my crusty old cowlinks can be
> viewed as snapshots.
>
> If one has spare cycles to waste, working on one of those makes more
> sense than implementing file versioning.
Too bad everyone is spending time on 10 similar-but-slightly-different
filesystems. This will likely end up with a bunch of filesystems that
implement some easy subset of features, but will not get polished for
users or have a full set of features implemented (e.g. ACL, quota, fsck,
etc). While I don't think there is a single answer to every question,
it does seem that the number of filesystem projects has climbed lately.
Maybe there should be a BOF at OLS to merge these filesystem projects
(btrfs, chunkfs, tilefs, logfs, etc) into a single project with multiple
people working on getting it solid, scalable (parallel readers/writers on
lots of CPUs), robust (checksums, failure localization), recoverable, etc.
I thought Val's FS summits were designed to get developers to collaborate,
but it seems everyone has gone back to their corners to work on their own
filesystem?
Working on getting hooks into DM/MD so that the filesystem and RAID layers
can move beyond "ignorance is bliss" when talking to each other would be
great. Not rebuilding empty parts of the fs, limit parity resync to parts
of the fs that were in the previous transaction, use fs-supplied checksums
to verify on-disk data is correct, use RAID geometry when doing allocations,
etc.
Cheers, Andreas
--
Andreas Dilger
Principal Software Engineer
Cluster File Systems, Inc.
Andreas Dilger wrote:
> Too bad everyone is spending time on 10 similar-but-slightly-different
> filesystems. This will likely end up with a bunch of filesystems that
> implement some easy subset of features, but will not get polished for
> users or have a full set of features implemented (e.g. ACL, quota, fsck,
> etc). While I don't think there is a single answer to every question,
> it does seem that the number of filesystem projects has climbed lately.
>
> Maybe there should be a BOF at OLS to merge these filesystem projects
> (btrfs, chunkfs, tilefs, logfs, etc) into a single project with multiple
> people working on getting it solid, scalable (parallel readers/writers on
> lots of CPUs), robust (checksums, failure localization), recoverable, etc.
> I thought Val's FS summits were designed to get developers to collaborate,
> but it seems everyone has gone back to their corners to work on their own
> filesystem?
>
> Working on getting hooks into DM/MD so that the filesystem and RAID layers
> can move beyond "ignorance is bliss" when talking to each other would be
> great. Not rebuilding empty parts of the fs, limit parity resync to parts
> of the fs that were in the previous transaction, use fs-supplied checksums
> to verify on-disk data is correct, use RAID geometry when doing allocations,
> etc.
How many of these features could be (at least partially) implemented
in the core code so that it would be easier to port filesystems to use
them? Certainly versioning should be implementable, possibly without
any FS support at all, in the core code. Centralising the
implementations could allow a great deal of FS
customisation/improvement with very little new per-FS code.
Jack
On Mon, 18 June 2007 03:45:24 -0600, Andreas Dilger wrote:
>
> Too bad everyone is spending time on 10 similar-but-slightly-different
> filesystems. This will likely end up with a bunch of filesystems that
> implement some easy subset of features, but will not get polished for
> users or have a full set of features implemented (e.g. ACL, quota, fsck,
> etc). While I don't think there is a single answer to every question,
> it does seem that the number of filesystem projects has climbed lately.
There definitely seems to be an inflation of new filesystems. These
days all the cool kids either write their own virtualization layer or
their own filesystem. No idea why that happened, two years ago
filesystems seemed old and boring.
> Maybe there should be a BOF at OLS to merge these filesystem projects
> (btrfs, chunkfs, tilefs, logfs, etc) into a single project with multiple
> people working on getting it solid, scalable (parallel readers/writers on
> lots of CPUs), robust (checksums, failure localization), recoverable, etc.
Consider me sceptical. Here is my personal opinion when looking at the
list:
Chunkfs, tilefs - research projects.
At this moment nobody knows whether either of the approaches works or
not. Once that is proven, the tricks should get incorporated in
existing filesystems.
Dave Chinner seems to be working on similar stuff for XFS already.
Assuming he can deliver, a chunked/tiled/... XFS is useful while chunkfs
and tilefs are mostly educational. It doesn't have to be XFS, Ext4
would do just as well.
Logfs - flash filesystem.
Btrfs - disk filesystem.
Disk optimization comes down to avoiding seeks like the plague. Flash
requires garbage collection, wear leveling and avoiding writes like the
plague.
It is quite unlikely that either filesystem will do well in the other's
domain. At least some amount of code will remain separate. Subsets of
code might be useful for both. Collaboration on that level would be
useful.
Jörn
--
Joern's library part 4:
http://www.paulgraham.com/spam.html
On Mon, Jun 18, 2007 at 03:45:24AM -0600, Andreas Dilger wrote:
> Too bad everyone is spending time on 10 similar-but-slightly-different
> filesystems. This will likely end up with a bunch of filesystems that
> implement some easy subset of features, but will not get polished for
> users or have a full set of features implemented (e.g. ACL, quota, fsck,
> etc). While I don't think there is a single answer to every question,
> it does seem that the number of filesystem projects has climbed lately.
I view some of the attempts for "from scratch" filesystems as ways of
testing out various designs as "proof-of-concepts". It's a great way
of demo'ing one's ideas, to see how well they work. There is a huge
chasm between a proof-of-concept and a full production filesystem that
has great repair/recovery tools, etc. That's why it's so important to
do the POC implementation first, so folks can see how well it works
before investing a huge amount of effort to make it be
production-ready.
So I actually think the number of these new filesystem proposals is a
*good* thing. It means people are interested in creating new
filesystems, and that's all good. Eventually, we'll need to decide
which design ideas should be combined, and that may be a little tough
to the egos involved, but that's all part of the darwinian kernel
programming model. Not all implementations make it into the kernel
mainline. That doesn't mean that the work done on the various
scheduler proposals was useless; it just helped demonstrate
concepts and advanced the debate.
Regards,
- Ted
On Mon, Jun 18, 2007 at 03:45:24AM -0600, Andreas Dilger wrote:
> Too bad everyone is spending time on 10 similar-but-slightly-different
> filesystems. This will likely end up with a bunch of filesystems that
> implement some easy subset of features, but will not get polished for
> users or have a full set of features implemented (e.g. ACL, quota, fsck,
> etc). While I don't think there is a single answer to every question,
> it does seem that the number of filesystem projects has climbed lately.
>
> Maybe there should be a BOF at OLS to merge these filesystem projects
> (btrfs, chunkfs, tilefs, logfs, etc) into a single project with multiple
> people working on getting it solid, scalable (parallel readers/writers on
> lots of CPUs), robust (checksums, failure localization), recoverable, etc.
> I thought Val's FS summits were designed to get developers to collaborate,
> but it seems everyone has gone back to their corners to work on their own
> filesystem?
Unfortunately, I can't do OLS this year, but anyone who wants to talk on
these things can drop me a line and we can setup phone calls or whatever
for planning. Adding polish to any FS is not a one man show, and so I know
I'll need to get more people on board to really finish btrfs off.
One of my long term goals for btrfs is to figure out the features and
layout people are most interested in for filesystems that don't have to
be ext* backwards compatible. I've got a pretty good start, but I'm
sure parts of it will change if I can get a big enough developer base.
>
> Working on getting hooks into DM/MD so that the filesystem and RAID layers
> can move beyond "ignorance is bliss" when talking to each other would be
> great. Not rebuilding empty parts of the fs, limit parity resync to parts
> of the fs that were in the previous transaction, use fs-supplied checksums
> to verify on-disk data is correct, use RAID geometry when doing allocations,
> etc.
Definitely. There's a lot of work in the DM integration bits that are
not FS specific.
-chris
On Mon, 18 Jun 2007, Theodore Tso wrote:
> On Mon, Jun 18, 2007 at 03:45:24AM -0600, Andreas Dilger wrote:
>> Too bad everyone is spending time on 10 similar-but-slightly-different
>> filesystems. This will likely end up with a bunch of filesystems that
>> implement some easy subset of features, but will not get polished for
>> users or have a full set of features implemented (e.g. ACL, quota, fsck,
>> etc). While I don't think there is a single answer to every question,
>> it does seem that the number of filesystem projects has climbed lately.
>
> I view some of the attempts for "from scratch" filesystems as ways of
> testing out various designs as "proof-of-concepts". It's a great way
> of demo'ing one's ideas, to see how well they work. There is a huge
> chasm between a proof-of-concept and a full production filesystem that
> has great repair/recovery tools, etc. That's why it's so important to
> do the POC implementation first, so folks can see how well it works
> before investing a huge amount of effort to make it be
> production-ready.
>
I just wish that people would learn from the mistakes of others. The
MacOS is a prime example of why you do not want to use a forked
filesystem, yet some people still seem to think it is a good idea.
(Forked filesystems tend to be fragile and do not play well with
non-forked filesystems.)
> So I actually think the number of these new filesystem proposals is a
> *good* thing. It means people are interested in creating new
> filesystems, and that's all good. Eventually, we'll need to decide
> which design ideas should be combined, and that may be a little tough
> to the egos involved, but that's all part of the darwinian kernel
> programming model. Not all implementations make it into the kernel
> mainline. That doesn't mean that the work done on the various
> scheduler proposals was useless; it just helped demonstrate
> concepts and advanced the debate.
I would like to see more clarification from the designers as to what
problem they are trying to solve. Some of the goals seem to be laudable,
but some are not problems that I worry about.
I see filesystems that are trying to handle the flakeyness of hardware.
That is useful to me. I also see people who are trying to archive every
little change for "legal reasons". I have a hard time with this one
because I have a hard enough time keeping spare hard drive space for the
stuff I want, not the space that someone else wants me to keep.
What I really want are high throughput systems where I can write and read
as fast as the hardware will allow. (And then I want faster hardware.)
--
"ANSI C says access to the padding fields of a struct is undefined.
ANSI C also says that struct assignment is a memcpy. Therefore struct
assignment in ANSI C is a violation of ANSI C..."
- Alan Cox
Bryan Henderson wrote:
>> Part of the problem is that "whenever you modify a file"
>> is ill-defined, or rather, if you were to take the literal meaning of it
>> you'd end up with an unmanageable number of revisions.
>
> Let me expand on that. Do you want to save a revision every time the user
> types in an editor? Every time he runs a "save" command? Every time a
> program does a write() system call? Every time a program closes a
> modified file? If you're adding to a C program, is every draft you
> compile a revision, or just the final modification after the bugs are
> worked out?
>
> When I was very new to coding, I used VMS and thought the automatic
> revisioning would be a great thing because it would save me when I
> modified a program and later regretted it. The system made a revision
> every time I exited the editor. But I soon found that the "previous
> revision" to which I wanted to revert was always many editings back, since
> I spent a lot of time trying to make the regrettable code work before
> giving up. VMS kept a fixed number of revisions per file. But keeping 20
> versions of other files would have been wasteful of disk space, directory
> listing space, etc.
>
> Later, I discovered what I think are superior alternatives: RCS-style
> version management on top of the filesystem, and automatic versioning
> based on time instead of count of "modifications." For example, make a
> copy of every changed file every hour and keep it for a day and keep one
> of those for a week, and keep one of those for a month, etc. This works
> even without snapshot technology and even without sub-file deltas. But of
> course, it's better with those.
From what I can see this seems to be the consensus (and it sounds
very sensible to me).
The question that remains is where to implement versioning: directly in
individual filesystems or in the vfs code so all filesystems can use it?
Jack
Jack Stone wrote:
>>
>> Later, I discovered what I think are superior alternatives: RCS-style
>> version management on top of the filesystem, and automatic versioning
>> based on time instead of count of "modifications." For example, make a
>> copy of every changed file every hour and keep it for a day and keep one
>> of those for a week, and keep one of those for a month, etc. This works
>> even without snapshot technology and even without sub-file deltas. But of
>> course, it's better with those.
>
> From what I can see this seems to be the consensus (and it sounds very
> sensible to me).
>
> The question that remains is where to implement versioning: directly in
> individual filesystems or in the vfs code so all filesystems can use it?
>
More likely a shim filesystem on top would be a better option.
-hpa
On Mon, Jun 18, 2007 at 09:16:30AM -0700, alan wrote:
>
> I just wish that people would learn from the mistakes of others. The
> MacOS is a prime example of why you do not want to use a forked
> filesystem, yet some people still seem to think it is a good idea.
> (Forked filesystems tend to be fragile and do not play well with
> non-forked filesystems.)
Jeremy Allison used to be the one who was always pestering me to add
Streams support into ext4, but recently he's admitted that I was right
that it was a Very Bad Idea.
As I mentioned in my Linux.conf.au presentation a year and a half ago,
the main use of Streams in Windows to date has been for system
crackers to hide trojan horse code and rootkits so that system
administrators couldn't find them. :-)
- Ted
Theodore Tso wrote:
>
> As I mentioned in my Linux.conf.au presentation a year and a half ago,
> the main use of Streams in Windows to date has been for system
> crackers to hide trojan horse code and rootkits so that system
> administrators couldn't find them. :-)
>
But... that's an essential feature in Windows! Otherwise, how would
Microsoft produce a RKDK (Rootkit development kit) to sell to Sony?
-hpa
On Mon, Jun 18, 2007 at 01:29:56PM -0400, Theodore Tso wrote:
> On Mon, Jun 18, 2007 at 09:16:30AM -0700, alan wrote:
> >
> > I just wish that people would learn from the mistakes of others. The
> > MacOS is a prime example of why you do not want to use a forked
> > filesystem, yet some people still seem to think it is a good idea.
> > (Forked filesystems tend to be fragile and do not play well with
> > non-forked filesystems.)
>
> Jeremy Allison used to be the one who was always pestering me to add
> Streams support into ext4, but recently he's admitted that I was right
> that it was a Very Bad Idea.
Yeah, ok - but do you have to rub my nose in it every chance you get ?
:-) :-).
Jeremy.
>The question remains is where to implement versioning: directly in
>individual filesystems or in the vfs code so all filesystems can use it?
Or not in the kernel at all. I've been doing versioning of the types I
described for years with user space code and I don't remember feeling that
I compromised in order not to involve the kernel.
Of course, if you want to do it with snapshots and COW, you'll have to ask
where in the kernel to put that, but that's not a file versioning
question; it's the larger snapshot question.
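The kind of time-based thinning described earlier in this thread (keep
hourly copies for a day, one per day for a week, one per week for a
month) is easy to express in user space. Here is a sketch; the tier
boundaries and bucketing are my assumptions:

```python
from datetime import datetime, timedelta

# Sketch of time-tiered retention (tier boundaries are assumptions):
# keep every snapshot from the last day, the newest snapshot per
# calendar day for the last week, and the newest per ISO week for
# the last month. Everything outside keep_set() could be pruned.
def keep_set(snaps, now):
    keep = set()
    tiers = [(timedelta(days=1), None),       # keep all
             (timedelta(days=7), "day"),      # one per calendar day
             (timedelta(days=30), "week")]    # one per ISO week
    for max_age, bucket in tiers:
        seen = set()
        for t in sorted(snaps, reverse=True): # newest first wins a bucket
            if now - t > max_age:
                continue
            if bucket is None:
                keep.add(t)
            else:
                key = t.date() if bucket == "day" else t.isocalendar()[:2]
                if key not in seen:
                    seen.add(key)
                    keep.add(t)
    return keep
```

This needs no kernel support at all: a cron job that copies changed
files and later prunes anything not in the keep set is enough.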
--
Bryan Henderson IBM Almaden Research Center
San Jose CA Filesystems
On Mon, Jun 18, 2007 at 10:33:42AM -0700, Jeremy Allison wrote:
>
> Yeah, ok - but do you have to rub my nose in it every chance you get ?
>
> :-) :-).
Well, I just want to make sure people know that Samba isn't asking for
it any more, and I don't know of any current requests outstanding from
any of the userspace projects. So there's no one we need to ship off
to the re-education camps about why filesystem fork/streams are a bad
idea. :-)
- Ted
On Mon, Jun 18, 2007 at 04:30:33PM -0400, Theodore Tso wrote:
> Well, I just want to make sure people know that Samba isn't asking for
> it any more, and I don't know of any current requests outstanding from
> any of the userspace projects. So there's no one we need to ship off
> to the re-education camps about why filesystem fork/streams are a bad
> idea. :-)
NFSv4 also has support for "named attributes"/forks/streams. As far as
I can recall, though, the only requests I've seen for it (aside from
people just running through NFSv4 feature checklists) have been from
people that really just wanted extended attributes.
And it'd probably actually be pretty easy to extend the v4 protocol to
support extended attributes if someone wanted to.
--b.
alan <[email protected]> wrote:
> I just wish that people would learn from the mistakes of others. The
> MacOS is a prime example of why you do not want to use a forked
> filesystem, yet some people still seem to think it is a good idea.
> (Forked filesystems tend to be fragile and do not play well with
> non-forked filesystems.)
What's the conceptual difference between forks and extended user attributes?
--
"Unix policy is to not stop root from doing stupid things because
that would also stop him from doing clever things." - Andi Kleen
"It's such a fine line between stupid and clever" - Derek Smalls
On Mon, 18 Jun 2007, Bodo Eggert wrote:
> alan <[email protected]> wrote:
>
>> I just wish that people would learn from the mistakes of others. The
>> MacOS is a prime example of why you do not want to use a forked
>> filesystem, yet some people still seem to think it is a good idea.
>> (Forked filesystems tend to be fragile and do not play well with
>> non-forked filesystems.)
>
> What's the conceptual difference between forks and extended user attributes?
Forks tend to contain more than just extended attributes. They contain
all sorts of other meta-data including icons, descriptions, author
information, copyright data, and whatever else can be shoveled into them
by the author/user.
alan wrote:
> On Mon, 18 Jun 2007, Bodo Eggert wrote:
>
>> alan <[email protected]> wrote:
>>
>>> I just wish that people would learn from the mistakes of others. The
>>> MacOS is a prime example of why you do not want to use a forked
>>> filesystem, yet some people still seem to think it is a good idea.
>>> (Forked filesystems tend to be fragile and do not play well with
>>> non-forked filesystems.)
>>
>> What's the conceptual difference between forks and extended user
>> attributes?
>
> Forks tend to contain more than just extended attributes. They contain
> all sorts of other meta-data including icons, descriptions, author
> information, copyright data, and whatever else can be shoveled into them
> by the author/user.
And that makes them different from extended attributes, how?
Both of these really are nothing but ad hocky syntactic sugar for
directories, sometimes combined with in-filesystem support for small
data items.
-hpa
On Mon, Jun 18, 2007 at 02:31:14PM -0700, H. Peter Anvin wrote:
> And that makes them different from extended attributes, how?
Streams on systems that support them allow lseek and are
accessed by fds. EAs are always a blob of data, read/written
in their entirety.
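On Linux that whole-blob behaviour is visible from user space via the
getxattr/setxattr calls; a small sketch (it tolerates filesystems that
refuse user.* attributes, and the attribute name is an assumption):

```python
import os
import tempfile

# Demonstrates that EAs are written and read as one whole blob
# (os.setxattr/os.getxattr, Linux-only): there is no fd to lseek on.
# Some filesystems refuse user.* attributes, so failure is tolerated.
def xattr_roundtrip(value=b"one whole blob"):
    fd, path = tempfile.mkstemp()
    os.close(fd)
    try:
        os.setxattr(path, "user.demo", value)   # entire value at once
        return os.getxattr(path, "user.demo")   # entire value back
    except (OSError, AttributeError):
        return None                             # xattrs unsupported here
    finally:
        os.remove(path)
```

A stream, by contrast, would be opened like a file and read or written
at arbitrary offsets, which is exactly what makes it big enough to hide
things in.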
Jeremy.
On Mon, Jun 18, 2007 at 02:31:14PM -0700, H. Peter Anvin wrote:
> And that makes them different from extended attributes, how?
>
> Both of these really are nothing but ad hocky syntactic sugar for
> directories, sometimes combined with in-filesystem support for small
> data items.
There's a good discussion of the issues involved in my LCA 2006
presentation.... which doesn't seem to be on the LCA 2006 site. Hrm.
I'll have to ask that this be fixed. In any case, here it is:
http://thunk.org/tytso/forkdepot.odp
- Ted
On Mon, 18 June 2007 18:10:21 -0400, Theodore Tso wrote:
> On Mon, Jun 18, 2007 at 02:31:14PM -0700, H. Peter Anvin wrote:
> > And that makes them different from extended attributes, how?
> >
> > Both of these really are nothing but ad hocky syntactic sugar for
> > directories, sometimes combined with in-filesystem support for small
> > data items.
>
> There's a good discussion of the issues involved in my LCA 2006
> presentation.... which doesn't seem to be on the LCA 2006 site. Hrm.
> I'll have to ask that this be fixed. In any case, here it is:
>
> http://thunk.org/tytso/forkdepot.odp
The main difference appears to be the potential size. Both extended
attributes and forks allow for extra data that I neither want nor need.
But once the extra space is large enough to hide a rootkit in, it
becomes a security problem instead of just something pointless.
Pointless here means that _I_ don't see the point. Maybe there are
valid uses for extended attributes. If there are, no one has explained
them to me yet.
Jörn
--
They laughed at Galileo. They laughed at Copernicus. They laughed at
Columbus. But remember, they also laughed at Bozo the Clown.
-- unknown
On Tue, Jun 19, 2007 at 12:26:57AM +0200, Jörn Engel wrote:
>
> Pointless here means that _I_ don't see the point. Maybe there are
> valid uses for extended attributes. If there are, no one has explained
> them to me yet.
Samba uses them to store DOS'ism's that you don't want in your
POSIX filesystem.
Jeremy.
On Mon, 18 Jun 2007, H. Peter Anvin wrote:
> alan wrote:
>> On Mon, 18 Jun 2007, Bodo Eggert wrote:
>>
>>> alan <[email protected]> wrote:
>>>
>>>> I just wish that people would learn from the mistakes of others. The
>>>> MacOS is a prime example of why you do not want to use a forked
>>>> filesystem, yet some people still seem to think it is a good idea.
>>>> (Forked filesystems tend to be fragile and do not play well with
>>>> non-forked filesystems.)
>>>
>>> What's the conceptual difference between forks and extended user
>>> attributes?
>>
>> Forks tend to contain more than just extended attributes. They contain
>> all sorts of other meta-data including icons, descriptions, author
>> information, copyright data, and whatever else can be shoveled into them
>> by the author/user.
>
> And that makes them different from extended attributes, how?
The amount of crap. Both seem to become a collection bin for "stuff we
need to describe this object". Forks seem to get more piled on, but they
are effectively the same thing.
> Both of these really are nothing but ad hocky syntactic sugar for
> directories, sometimes combined with in-filesystem support for small
> data items.
And both tend to break when you go to a file system that does not support
them.
On Mon, Jun 18, 2007 at 06:10:21PM -0400, Theodore Tso wrote:
> On Mon, Jun 18, 2007 at 02:31:14PM -0700, H. Peter Anvin wrote:
> > And that makes them different from extended attributes, how?
> >
> > Both of these really are nothing but ad hocky syntactic sugar for
> > directories, sometimes combined with in-filesystem support for small
> > data items.
>
> There's a good discussion of the issues involved in my LCA 2006
> presentation.... which doesn't seem to be on the LCA 2006 site. Hrm.
> I'll have to ask that this be fixed. In any case, here it is:
>
> http://thunk.org/tytso/forkdepot.odp
Did you ever code up forkdepot ? Just wondering ?
Just because I now agree with you that streams are
a bad idea doesn't mean the pressure to support them
in some way in Samba has gone away :-).
Jeremy.
On Tue, 19 Jun 2007, Jörn Engel wrote:
> On Mon, 18 June 2007 18:10:21 -0400, Theodore Tso wrote:
>> On Mon, Jun 18, 2007 at 02:31:14PM -0700, H. Peter Anvin wrote:
>>> And that makes them different from extended attributes, how?
>>>
>>> Both of these really are nothing but ad hocky syntactic sugar for
>>> directories, sometimes combined with in-filesystem support for small
>>> data items.
>>
>> There's a good discussion of the issues involved in my LCA 2006
>> presentation.... which doesn't seem to be on the LCA 2006 site. Hrm.
>> I'll have to ask that this be fixed. In any case, here it is:
>>
>> http://thunk.org/tytso/forkdepot.odp
>
> The main difference appears to be the potential size. Both extended
> attributes and forks allow for extra data that I neither want nor need.
> But once the extra space is large enough to hide a rootkit in, it
> becomes a security problem instead of just something pointless.
>
> Pointless here means that _I_ don't see the point. Maybe there are
> valid uses for extended attributes. If there are, no one has explained
> them to me yet.
Most of the extended attribute systems I have seen have been a set of
flags. "If this bit is set, the user can do thus to this object."
Sometimes it is a named attribute that is attached to the object.
Forks tend to be "this blob of data is attached to this object".
With forks, the choices tend to be a lot more arbitrary.
On Mon, 18 Jun 2007, Jeremy Allison wrote:
> Just because I now agree with you that streams are
> a bad idea doesn't mean the pressure to support them
> in some way in Samba has gone away :-).
Having dealt with Stream's support[1] in the past, I can assure you it is
a bad idea. ]:>
[1] http://www.stream.com/
On Mon, Jun 18, 2007 at 11:32:38AM -0400, Chris Mason wrote:
> On Mon, Jun 18, 2007 at 03:45:24AM -0600, Andreas Dilger wrote:
> > Too bad everyone is spending time on 10 similar-but-slightly-different
> > filesystems. This will likely end up with a bunch of filesystems that
> > implement some easy subset of features, but will not get polished for
> > users or have a full set of features implemented (e.g. ACL, quota, fsck,
> > etc). While I don't think there is a single answer to every question,
> > it does seem that the number of filesystem projects has climbed lately.
> >
> > Maybe there should be a BOF at OLS to merge these filesystem projects
> > (btrfs, chunkfs, tilefs, logfs, etc) into a single project with multiple
> > people working on getting it solid, scalable (parallel readers/writers on
> > lots of CPUs), robust (checksums, failure localization), recoverable, etc.
> > I thought Val's FS summits were designed to get developers to collaborate,
> > but it seems everyone has gone back to their corners to work on their own
> > filesystem?
>
> Unfortunately, I can't do OLS this year, but anyone who wants to talk on
> these things can drop me a line and we can setup phone calls or whatever
> for planning. Adding polish to any FS is not a one man show, and so I know
> I'll need to get more people on board to really finish btrfs off.
>
> One of my long term goals for btrfs is to figure out the features and
> layout people are most interested in for filesystems that don't have to
> be ext* backwards compatible. I've got a pretty good start, but I'm
> sure parts of it will change if I can get a big enough developer base.
I have no filesystem programming experience, but I am certainly
interested, and I'm spending some time reading through the code that
you've written so far. Oh, and running it - though I'm probably going
to want to fiddle with some smaller filesystems than my entire Maildir
set if I want to make any sense of the structure dumps!
That and of course if I get involved in development I can be sure that
my favourite workload (big Cyrus installs) is well optimized for!
Actually, my biggest interest is decent unlink performance, in
particular when you are unlinking multiple items in a directory or
even the entire directory plus everything in it. I find that to be
an incredibly slow and IO hurting operation. We run cyr_expire
(the process in Cyrus that actually deletes expunged messages) once
per week, and only one process at a time on a machine which might have
20 otherwise busy instances of Cyrus running - because the IO hit on
those data partitions is massive. Load average more than doubles and
the log entries for commands which took longer than a second to return
increase massively.
And this is on a Sunday when there's barely any use compared to a
weekday.
So yeah, my main interest is making unlink (especially multiple unlinks
from the same directory) into a less extreme experience.
Bron.
On Tue, Jun 19, 2007 at 12:26:57AM +0200, Jörn Engel wrote:
> Pointless here means that _I_ don't see the point. Maybe there are
> valid uses for extended attributes. If there are, noone has explained
> them to me yet.
The users of extended attributes that I've dealt with are ACL support
and SELinux. These both use extended attributes under the covers. It's
just not immediately obvious if you aren't looking.
Brad Boyer
[email protected]
On Jun 18, 2007, at 13:56:05, Bryan Henderson wrote:
>> The question remains is where to implement versioning: directly in
>> individual filesystems or in the vfs code so all filesystems can
>> use it?
>
> Or not in the kernel at all. I've been doing versioning of the
> types I described for years with user space code and I don't
> remember feeling that I compromised in order not to involve the
> kernel.
>
> Of course, if you want to do it with snapshots and COW, you'll have
> to ask where in the kernel to put that, but that's not a file
> versioning question; it's the larger snapshot question.
What I think would be particularly interesting in this domain is
something similar in concept to GIT, except in a file-system:
1) Redundancy is easy, you just ensure that you have at least "N"
distributed copies of each object, where "N" is some function of the
object itself.
2) Network replication is easy, you look up objects based on the
SHA-1 stored in the parent directory entry and cache them where
needed (IE: make the "N" function above dynamic based on frequency of
access on a given computer).
3) Snapshots are easy and cheap; an RO snapshot is a tag and an RW
snapshot is a branch. These can be easily converted between.
4) Compression is easy; you can compress objects based on any
arbitrary configurable criteria and the filesystem will record
whether or not an object is compressed. You can also compress
differently when archiving objects to secondary storage.
5) Streaming fsck-like verification is easy; ensure the hash name
field matches the actual hash of the object.
6) Fsck is easy since rollback is trivial, you can always revert
to an older tree to boot and start up services before attempting
resurrection of lost objects and trees in the background.
7) Multiple-drive or multiple-host storage pools are easy: Think
the git "alternates" file.
8) Network filesystem load-balancing is easy; SHA-1s are
essentially random so you can just assign SHA-1 prefixes to different
systems for data storage and your data is automatically split up.
Other issues:
Q. How do you deal with block allocation?
A. Same way other filesystems deal with block allocation. Reference-
counting gets tricky, especially across a network, but it's easy to
play it safe with simple cross-network refcount-journalling. Since
the _only_ thing that needs journalling is the refcounts and block-
free data, you need at most a megabyte or two of journal. If in
doubt, it's easy to play it safe and keep an extra refcount around
for an in-the-background consistency check later on. When networked-
gitfs systems crash, you just assume they still have all the
refcounts they had at the moment they died, and compare notes when
they start back up again. If a node has a cached copy of data on its
local disk then it can just nonatomically increment the refcount for
that object in its own RAM (ordered with respect to disk-flushes, of
course) and tell its peers at some point. A node should probably
cache most of its working set on local disk for efficiency; it's
trivially verified against updates from other nodes and provides an
easy way to keep refcounts for such data. If a node increments the
refcount on such data and dies before getting that info out to its
peers, then when it starts up again its peers will just be told that
it has a "new" node with insufficient replication and they will clone
it out again properly. For networked refcount-increments you can do
one of 2 things: (1) Tell at least X many peers and wait for them to
sync the update out to disk, or (2) Get the object from any peer (at
least one of whom hopefully has it in RAM) and save it to local disk
with an increased refcount.
Q. How do you actually delete things?
A. Just replace all the to-be-erased tree and commit objects before a
specified point with "History erased" objects with their SHA-1's
magically set to that of the erased objects. If you want you may
delete only the "tree" objects and leave the commits intact. If you
delete a whole linear segment of history then you can just use a
single "History erased" commit object with its parent pointed to the
object before the erased segment. Probably needs some form of back-
reference storage to make it efficient; not sure how expensive that
would be. This would allow making a bunch of snapshots and purging
them logarithmically based on passage of time. For instance, you
might have snapshots of every 5 minutes for the last hour, every 30
minutes for the last day, every 4 hours for the last week, every day
for the last month, once per week for the last year, once per month
for the last 5 years, and once per year beyond that.
That's pretty impressive data-recovery resolution, and it accounts
for only 200 unique commits after it's been running for 10 years.
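That logarithmic pruning schedule can be sketched as a small retention function (illustrative only; the SCHEDULE table just transcribes the example intervals above, truncated at one year):

```python
SCHEDULE = [  # (horizon in minutes, step in minutes)
    (60,            5),            # last hour: every 5 minutes
    (60 * 24,       30),           # last day: every 30 minutes
    (60 * 24 * 7,   60 * 4),       # last week: every 4 hours
    (60 * 24 * 30,  60 * 24),      # last month: every day
    (60 * 24 * 365, 60 * 24 * 7),  # last year: every week
]

def snapshots_to_keep(ages):
    """Given snapshot ages in minutes, return the ages to retain:
    the newest snapshot in each schedule slot."""
    seen_slots = set()
    keep = []
    for age in sorted(ages):  # newest (smallest age) first
        for horizon, step in SCHEDULE:
            if age <= horizon:
                slot = (horizon, age // step)
                if slot not in seen_slots:
                    seen_slots.add(slot)
                    keep.append(age)
                break
    return keep
```

Everything older than a slot's newest snapshot would become a "History erased" object as described above.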
Q. How do you archive data?
A. Same as deleting, except instead of a "History erased" object you
would use a "History archived" object with a little bit of string
data to indicate which volume it's stored on (and where on the
volume). When you stick that volume into the system you could easily
tell the kernel to use it as an alternate for the given storage group.
Q. What enforces data integrity?
A. Ensure that a new tree object and its associated sub objects are
on disk before you delete the old one. Doesn't need any actual full
syncs at all, just barriers. If you replace the tree object before
write-out is complete then just skip writing the old one and write
the new one in its place.
Q. What constitutes a "commit"?
A. Anything the administrator wants to define it as. Useful
algorithms include: "Once per x Mbyte of page dirtying", "Once per 5
min", "Only when sync() or fsync() are called", "Only when gitfs-
commit is called". You could even combine them: "Every x Mbyte of
page dirtying or every 5 minutes, whichever is shorter (or longer,
depending on admin requirements)". There would also be appropriate
syscalls to trigger appropriate git-like behavior. Network-
accessible gitfs would want to have mechanisms to trigger commits
based on activity on other systems (needs more thought).
Q. How do you access old versions?
A. Mount another instance of the filesystem with an SHA-1 ID, a tag-
name, or a branch-name in a special mount option. Should be user
accessible with some restrictions (needs more thought).
Q. How do you deal with conflicts on networked filesystems?
A. Once again, however the administrator wants to deal with them.
Options:
1) Forcibly create a new branch for the conflicted tree.
2) Attempt to merge changes using the standard git-merge semantics
3) Merge independent changes to different files and pick one for
changes to the same file
4) Your Algorithm Here(TM). GIT makes it easy to extend
conflict-resolution.
Q. How do you deal with little scattered changes in big (or sparse)
files?
A. Two questions, two answers: For sparse files, git would need
extending to understand (and hash) the nature of the sparse-ness.
For big files, you should be able to introduce a "compound-file"
datatype and configure git to deal with specific X-Mbyte chunks of it
independently. This might not be a bad idea for native git as well.
Would need system-specific configuration.
Q. How do you prevent massive data consumption by spurious tiny changes
A. You have a few options:
1) Configure your commit algorithm as above to not commit so often
2) Configure a stepped commit-discard algorithm as described
above in the "How do you delete things" question
3) Archive unused data to secondary storage more often
Q. What about all the unanswered questions?
A. These are all the ones I could think of off the top of my head but
there are at least a hundred more. I'm pretty sure these are some of
the most significant ones.
Q. That's a great idea and I'll implement it right away!
A. Yay! (but that's not a question :-D) Good luck and happy hacking.
Q. That's a stupid idea and would never ever work!
A. Thanks for your useful input! (but that's not a question either)
I'm sure anybody who takes up a project like this will consider such
opinions.
Q. *flamage*
A. I'm glad you have such strong opinions, feel free to continue
to spam my /dev/null device (and that's also not a question).
All opinions and comments welcomed.
Cheers,
Kyle Moffett
On Jun 18, 2007, at 17:24:23, Brad Boyer wrote:
> On Tue, Jun 19, 2007 at 12:26:57AM +0200, Jörn Engel wrote:
>> Pointless here means that _I_ don't see the point. Maybe there
>> are valid uses for extended attributes. If there are, noone has
>> explained them to me yet.
>
> The users of extended attributes that I've dealt with are ACL
> support and SELinux. These both use extended attributes under the
> covers. It's just not immediately obvious if you aren't looking.
Yeah, extended attributes are typically used for exactly that:
"attributes" like labels, permissions, encoding, cached file-type,
DOS/Windows/Mac metadata, etc. Sometimes people suggest sticking
icons in there, but that's probably a bad idea. At most stick an
"icon label" attribute which refers to a file "/usr/share/icons/
by_attr/$ICON_LABEL.png". If you're trying to put more than 256
bytes of data in an extended attribute then you're probably doing
something wrong. They're very good for cached attributes (like file-
type) where you don't care if the data is lost by "tar", and they're
reasonable for security-related attributes where you don't want
attribute-unaware programs trying to save and restore them (like
SELinux labels).
Cheers,
Kyle Moffett
On Tue, Jun 19, 2007 at 12:26:57AM +0200, Jörn Engel wrote:
> The main difference appears to be the potential size. Both extended
> attributes and forks allow for extra data that I neither want or need.
> But once the extra space is large enough to hide a rootkit in, it
> becomes a security problem instead of just something pointless.
The other difference is that you can't execute an extended attribute.
You can store kvm/qemu, a complete virtualization environment, shared
libraries, and other executables all inside forks inside a file, and
then execute programs/rootkits out of said file fork(s).
As I mentioned in my LCA presentation, one system administrator
refused to upgrade beyond Solaris 8 because he thought forks were good
for nothing but letting system crackers hide rootkits that wouldn't be
detected by programs like tripwire. The question then is why in the
world would we want to replicate Sun's mistakes?
- Ted
On Mon, Jun 18, 2007 at 03:48:15PM -0700, Jeremy Allison wrote:
> Did you ever code up forkdepot ? Just wondering ?
There is a partial implementation lying around somewhere, but there
were a number of problems we ran into that were discussed in the
slidedeck. Basically, if the only program accessing the files
containing forks was the Samba program calling forkdepot library, it
worked fine. But if there were other programs (or NFS servers) that
were potentially deleting files, moving files around, then things fell
apart fairly quickly.
> Just because I now agree with you that streams are
> a bad idea doesn't mean the pressure to support them
> in some way in Samba has gone away :-).
What, even with Winfs delaying Microsoft Longwait by years before
finally being flushed? :-)
- Ted
Kyle Moffett wrote:
> On Jun 18, 2007, at 13:56:05, Bryan Henderson wrote:
>> [...]
>
> What I think would be particularly interesting in this domain is
> something similar in concept to GIT, except in a file-system:
[cut]
It sounds brilliant and I'd love to have a go at implementing it, but I
don't know enough (yet :-D) about how git works; a little research is
called for, I think.
Jack
On Mon, Jun 18, 2007 at 11:10:42PM -0400, Kyle Moffett wrote:
> On Jun 18, 2007, at 13:56:05, Bryan Henderson wrote:
>>> The question remains is where to implement versioning: directly in
>>> individual filesystems or in the vfs code so all filesystems can use it?
>>
>> Or not in the kernel at all. I've been doing versioning of the types I
>> described for years with user space code and I don't remember feeling that
>> I compromised in order not to involve the kernel.
>
> What I think would be particularly interesting in this domain is something
> similar in concept to GIT, except in a file-system:
I've written a couple of user-space things very much like this - one
being a purely database-backed (blobs in a database, yeah I know) system
for managing medical data, where signatures and auditability were the
most important parts of the system. Performance really wasn't a
consideration.
The other one is my current job, FastMail - we have a virtual filesystem
which uses files stored by sha1 on ordinary filesystems for data
storage and a database for metadata (filename to sha1 mappings, mtime,
mimetype, directory structure, etc).
Multiple machine distribution is handled by a daemon on each machine
which can be asked to make sure the file gets sent out to every machine
that matches the prefix and will only return success once it's written
to at least one other machine. Database replication is a different
beast.
It can work, but there's one big pain at the file level: no mmap.
If you don't want to support mmap it can work reasonably happily, though
you may want to keep your sha1 (or other digest) state as well as the
final digest so you can cheaply calculate the digest for a small append
without walking the entire file. You may also want to keep state
checkpoints every so often along a big file so that truncates don't cost
too much to recalculate.
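The append/truncate digest trick could look roughly like this (an illustrative sketch, not FastMail's code; hashlib's in-memory .copy() stands in for persisted hash state, and names like AppendDigest and the checkpoint interval are invented):

```python
import hashlib

CHECKPOINT_EVERY = 4096  # bytes between saved hash states (illustrative)

class AppendDigest:
    def __init__(self):
        self.state = hashlib.sha1()
        self.length = 0
        self.checkpoints = {0: self.state.copy()}  # offset -> hash state

    def append(self, data):
        # only the new bytes are hashed, however large the file already is
        self.state.update(data)
        self.length += len(data)
        if self.length // CHECKPOINT_EVERY > max(self.checkpoints) // CHECKPOINT_EVERY:
            self.checkpoints[self.length] = self.state.copy()

    def digest(self):
        return self.state.hexdigest()

    def truncate(self, new_length, read_range):
        # resume from the last checkpoint at or before new_length and
        # re-hash only the bytes between it and the truncation point;
        # read_range(start, end) returns those file bytes
        offset = max(o for o in self.checkpoints if o <= new_length)
        state = self.checkpoints[offset].copy()
        state.update(read_range(offset, new_length))
        self.state, self.length = state, new_length
        self.checkpoints = {o: s for o, s in self.checkpoints.items()
                            if o <= new_length}
```

This covers create/append/read/delete plus truncate; as noted, random writes and mmap are exactly what it can't handle cheaply.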
Luckily, in a userspace VFS that's only accessed via FTP and DAV we can
support a limited set of operations (basically create, append, read,
delete). You don't get that luxury for a general purpose filesystem, and
that's the problem. There will always be particular usage patterns
(especially something that mmaps or seeks and touches all over the place,
like a loopback mounted filesystem or a database file) that just don't
work for file-level sha1s.
It does have some lovely properties though. I'd enjoy working in an
environment that didn't look much like POSIX but had the strong
guarantees and auditability that addressing by sha1 buys you.
Bron.
On 6/19/07, Kyle Moffett <[email protected]> wrote:
> What I think would be particularly interesting in this domain is
> something similar in concept to GIT, except in a file-system:
perhaps stating the blindingly obvious, but there was an early
implementation of a FUSE-based gitfs --
http://www.sfgoth.com/~mitch/linux/gitfs/
cheers,
martin
On Tue, Jun 19, 2007 at 03:05:07AM -0400, Theodore Tso wrote:
>
> There is a partial implementation lying around somewhere, but there
> were a number of problems we ran into that were discussed in the
> slidedeck. Basically, if the only program accessing the files
> containing forks was the Samba program calling forkdepot library, it
> worked fine. But if there were other programs (or NFS servers) that
> were potentially deleting files, moving files around, then things fell
> apart fairly quickly.
I'd be happy with a Samba-only implementation for Appliance
vendors.
> What, even with Winfs delaying Microsoft Longwait by years before
> finally being flushed? :-)
I'm not talking WinFS, I'm talking streams..... Streams are already
being used (mainly by malware writers of course - but hey, don't
you want full compatibility ? :-).
Jeremy.
Jeremy Allison wrote:
>
> I'm not talking WinFS, I'm talking streams..... Streams are already
> being used (mainly by malware writers of course - but hey, don't
> you want full compatibility ? :-).
>
Reminds me of the Linux Journal (I believe?) article which did
viruses-on-Wine compatibility testing.
-hpa
Kyle Moffett wrote:
> On Jun 18, 2007, at 13:56:05, Bryan Henderson wrote:
>>> The question remains is where to implement versioning: directly in
>>> individual filesystems or in the vfs code so all filesystems can
>>> use it?
>>
>> Or not in the kernel at all. I've been doing versioning of the
>> types I described for years with user space code and I don't
>> remember feeling that I compromised in order not to involve the
>> kernel.
>>
>> Of course, if you want to do it with snapshots and COW, you'll have
>> to ask where in the kernel to put that, but that's not a file
>> versioning question; it's the larger snapshot question.
>
> What I think would be particularly interesting in this domain is ?
> something similar in concept to GIT, except in a file-system
[cut]
How does this relate to the ext3cow versioning (snapshotting)
filesystem, for example? ext3cow assumes linear history, which
simplifies things a bit.
--
Jakub Narebski
Warsaw, Poland
ShadeHawk on #git
Jack Stone wrote:
> Chris Snook wrote:
>> The underlying internal implementation of something like this wouldn't
>> be all that hard on many filesystems, but it's the interface that's the
>> problem. The ':' character is a perfectly legal filename character, so
>> doing it that way would break things.
>
> But to work without breaking userspace it would need to be a character
> that would pass through any path checking routines, ie be a legal path
> character.
>
>> I think NetApp more or less got the interface right by putting a
>> .snapshot directory in each directory, with time-versioned
>> subdirectories each containing snapshots of that directory's contents
>> at those points in time. It keeps the backups under the same
>> hierarchy as the original files, to avoid permissions headaches,
>> it's accessible over NFS without modifying the client at all,
>> and it's hidden just enough to make it hard for users to do something
>> stupid.
>
> My personal implementation idea was to store lots of files of the form
> file:revision_number (I'll keep using that until somebody suggests
> something better) on the filesystem itself, with a hard link from the
> latest version to file (this is probably not a major improvement, and
> having the hard link could make it hard to implement deltas). This could
> mean no changes to the filesystem itself (except maybe a flag to say
> it's versioned). The kernel would then do the translation to find the
> correct file, and would only show the latest version to user apps not
> requesting a specific version.
I pointed out NetApp's .snapshot directories because that's a method that uses a
legal path character but doesn't break anything. With your method, userspace
tools will have to be taught that : is suddenly a special character. Userspace
already knows that files beginning with . are special and treats them specially.
We don't need a new special character for every new feature. We've got one,
and it's flexible enough to do what you want, as proven by NetApp's extremely
successful implementation. Perhaps you want a slightly different interface from
what NetApp has implemented, but what you're suggesting will change the default
behavior of basic tools like tar and ls. This is not a good thing.
>> If you want to do something like this (and it's generally not a bad
>> idea), make sure you do it in a way that's not going to change the
>> behavior seen by existing applications, and that is accessible to
>> unmodified remote clients. Hidden .snapshot directories are one way, a
>> parallel /backup filesystem could be another, whatever. If you break
>> existing apps, I won't touch it with a ten foot pole.
>
> The whole interface would be designed to give existing behavior as
> default for two reasons: users are used to opening a file and getting
> the latest version and not to break userspace. I personally wouldn't
> touch this either if it broke userspace. The only userspace change would
> be the addition of tools to manage the revisions etc. Userspace could
> later upgrade to take advantage of the new functionality but I cannot
> see the worth in breaking it.
But what you're talking about *will* break userspace. If I do an ls in a
directory, and get pages upon pages of versions of just one file, that's broken.
If I tar up a directory and get a tarball that's hundreds of times larger than
it should be, that's broken. If you want the files to be hidden from userspace
applications that don't know about your backup scheme, (and it sounds like you
do) then use the existing convention for hidden files, the prepended '.' This
is the universal sign for "don't mess with me unless you know what you're doing".
-- Chris
Chris Snook wrote:
> But what you're talking about *will* break userspace. If I do an ls in
> a directory, and get pages upon pages of versions of just one file,
> that's broken. If I tar up a directory and get a tarball that's
> hundreds of times larger than it should be, that's broken. If you want
> the files to be hidden from userspace applications that don't know about
> your backup scheme, (and it sounds like you do) then use the existing
> convention for hidden files, the prepended '.' This is the universal
> sign for "don't mess with me unless you know what you're doing".
The idea was that if you did an ls you would get the latest version of
the file without the :revision_num. The only visible version would be
the latest version, i.e. the current system would not change. The idea
was that it would only show earlier versions if they were specifically
requested with a :revision_num suffix. In that case the
filesystem/kernel would need to recognise the suffix and return the
earlier version of the file.
The only userspace it would break is anything using files with :num in
their names. As I haven't seen any files like that, I don't think it's
too big a problem, but the way of specifying revisions could be changed.
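As a rough illustration of the lookup rule being proposed (no suffix means "open the latest revision", so existing userspace sees no change), here is a user-space sketch of the suffix parsing; the helper name `split_revision` is invented for the example and is not real kernel code.

```python
# Hypothetical sketch of the proposed "path:revision" lookup.
# Rule: a path with no numeric suffix opens the latest revision,
# so unmodified applications behave exactly as today.

def split_revision(path):
    """Split 'foo:3' into ('foo', 3); return (path, None) when no
    numeric revision suffix is present."""
    base, sep, tail = path.rpartition(":")
    if sep and tail.isdigit():
        return base, int(tail)
    return path, None

print(split_revision("/etc/passwd"))    # ('/etc/passwd', None) -> latest
print(split_revision("/etc/passwd:7"))  # ('/etc/passwd', 7)
```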
Jack
Chris Snook wrote:
> I pointed out NetApp's .snapshot directories because that's a method
> that uses legal path character, but doesn't break anything. With this
> method, userspace tools will have to be taught that : is suddenly a
> special character.
Not to mention that the character historically used for this purpose is
; (semicolon.)
-hpa
H. Peter Anvin wrote:
> Chris Snook wrote:
>> I pointed out NetApp's .snapshot directories because that's a method
>> that uses legal path character, but doesn't break anything. With this
>> method, userspace tools will have to be taught that : is suddenly a
>> special character.
>
> Not to mention that the character historically used for this purpose is
> ; (semicolon.)
But that would cause havoc with shells, which use ; to separate commands.
Using ; would definitely break userspace.
Jack
Jack Stone wrote:
>
> But that would cause havoc with shells, which use ; to separate commands.
> Using ; would definitely break userspace.
>
Not really. It's just a bit awkward to use, but so's the whole concept.
-hpa
H. Peter Anvin wrote:
> Jack Stone wrote:
>> But that would cause havoc with shells, which use ; to separate commands.
>> Using ; would definitely break userspace.
>>
>
> Not really. It's just a bit awkward to use, but so's the whole concept.
I think we can all agree on that after this thread but I still don't
want people to have the wrong idea about the design.
Jack
Jack Stone wrote:
> Chris Snook wrote:
>> But what you're talking about *will* break userspace. If I do an ls in
>> a directory, and get pages upon pages of versions of just one file,
>> that's broken. If I tar up a directory and get a tarball that's
>> hundreds of times larger than it should be, that's broken. If you want
>> the files to be hidden from userspace applications that don't know about
>> your backup scheme, (and it sounds like you do) then use the existing
>> convention for hidden files, the prepended '.' This is the universal
>> sign for "don't mess with me unless you know what you're doing".
>
> The idea was that if you did an ls you would get the latest version of
> the file without the :revision_num. The only visible version would be
> the latest version, i.e. the current system would not change. The idea
> was that it would only show earlier versions if they were specifically
> requested with a :revision_num suffix. In that case the
> filesystem/kernel would need to recognise the suffix and return the
> earlier version of the file.
>
> The only userspace it would break is anything using files with :num in
> their names. As I haven't seen any files like that, I don't think it's
> too big a problem, but the way of specifying revisions could be changed.
>
> Jack
I have one right now:
$ ls /tmp/ksocket-csnook/kdeinit*
/tmp/ksocket-csnook/kdeinit__0 /tmp/ksocket-csnook/kdeinit-:0
Note, I did not pass any special arguments to ls to make it pull up that file.
You'd have to modify ls to make it do that. You'd also need to modify
everything else out there. There are decades of programs out there that would
behave differently with the interface you propose.
The more fundamental problem with your proposed interface is that it treats a
filesystem like an opaque server, instead of a transparent data structure. You
want files to be completely invisible to applications that don't know about it,
unless the user requests it. Unfortunately, it doesn't work that way.
Applications ask for a directory listing, and will open the requested file if
and only if the filename in question appears in that listing. If you want to
use this opaque server model, you'd be better served putting it in some parallel
file system (say, /backup) that won't interfere with naive applications
accessing the mundane data. Personally, I like your idea of putting the older
versions in the same directory hierarchy, but I think you'd have to use .foo
hidden directories to do it right.
-- Chris
Chris Snook wrote:
> Jack Stone wrote:
>> The idea was that if you did an ls you would get the latest version of
>> the file without the :revision_num. The only visible version would be
>> the latest version, i.e. the current system would not change. The idea
>> was that it would only show earlier versions if they were specifically
>> requested with a :revision_num suffix. In that case the
>> filesystem/kernel would need to recognise the suffix and return the
>> earlier version of the file.
>>
>> The only userspace it would break is anything using files with :num in
>> their names. As I haven't seen any files like that, I don't think it's
>> too big a problem, but the way of specifying revisions could be changed.
>>
>> Jack
>
> I have one right now:
>
> $ ls /tmp/ksocket-csnook/kdeinit*
> /tmp/ksocket-csnook/kdeinit__0 /tmp/ksocket-csnook/kdeinit-:0
>
> Note, I did not pass any special arguments to ls to make it pull up that
> file. You'd have to modify ls to make it do that. You'd also need to
> modify everything else out there. There are decades of programs out
> there that would behave differently with the interface you propose.
>
> The more fundamental problem with your proposed interface is that it
> treats a filesystem like an opaque server, instead of a transparent data
> structure. You want files to be completely invisible to applications
> that don't know about it, unless the user requests it. Unfortunately,
> it doesn't work that way. Applications ask for a directory listing, and
> will open the requested file if and only if the filename in question
> appears in that listing. If you want to use this opaque server model,
> you'd be better served putting it in some parallel file system (say,
> /backup) that won't interfere with naive applications accessing the
> mundane data. Personally, I like your idea of putting the older
> versions in the same directory hierarchy, but I think you'd have to use
> .foo hidden directories to do it right.
The whole idea of the filesystem is that it wouldn't return the old
versions in the file listing. The user would have to know that the
filesystem was versioned to access the older versions, as they would
have to request them explicitly.
Jack
Jack Stone wrote:
> H. Peter Anvin wrote:
>> Chris Snook wrote:
>>> I pointed out NetApp's .snapshot directories because that's a method
>>> that uses legal path character, but doesn't break anything. With this
>>> method, userspace tools will have to be taught that : is suddenly a
>>> special character.
>> Not to mention that the character historically used for this purpose is
>> ; (semicolon.)
>
> But that would cause havoc with shells, which use ; to separate commands.
> Using ; would definitely break userspace.
>
> Jack
>
I can escape the semicolon just fine in bash. In fact, tab-completion will do
this automatically. That's really a non-issue. It just means that anyone who
wants to use this feature would have to know what they're doing, which I believe
is your goal, right?
-- Chris
Chris Snook wrote:
> Jack Stone wrote:
>> H. Peter Anvin wrote:
>>> Chris Snook wrote:
>>>> I pointed out NetApp's .snapshot directories because that's a method
>>>> that uses legal path character, but doesn't break anything. With this
>>>> method, userspace tools will have to be taught that : is suddenly a
>>>> special character.
>>> Not to mention that the character historically used for this purpose is
>>> ; (semicolon.)
>>
>> But that would cause havoc with shells, which use ; to separate commands.
>> Using ; would definitely break userspace.
>>
>> Jack
>>
>
> I can escape the semicolon just fine in bash. In fact, tab-completion
> will do this automatically. That's really a non-issue. It just means
> that anyone who wants to use this feature would have to know what
> they're doing, which I believe is your goal, right?
I didn't realise this. Would ; break userspace if it was used as the
delimiter?
This discussion may be academic as this design is looking less and less
useful/workable.
Jack
Jack Stone wrote:
> Chris Snook wrote:
>> Jack Stone wrote:
>>> The idea was that if you did an ls you would get the latest version of
>>> the file without the :revision_num. The only visible version would be
>>> the latest version, i.e. the current system would not change. The idea
>>> was that it would only show earlier versions if they were specifically
>>> requested with a :revision_num suffix. In that case the
>>> filesystem/kernel would need to recognise the suffix and return the
>>> earlier version of the file.
>>>
>>> The only userspace it would break is anything using files with :num in
>>> their names. As I haven't seen any files like that, I don't think it's
>>> too big a problem, but the way of specifying revisions could be changed.
>>>
>>> Jack
>> I have one right now:
>>
>> $ ls /tmp/ksocket-csnook/kdeinit*
>> /tmp/ksocket-csnook/kdeinit__0 /tmp/ksocket-csnook/kdeinit-:0
>>
>> Note, I did not pass any special arguments to ls to make it pull up that
>> file. You'd have to modify ls to make it do that. You'd also need to
>> modify everything else out there. There are decades of programs out
>> there that would behave differently with the interface you propose.
>>
>> The more fundamental problem with your proposed interface is that it
>> treats a filesystem like an opaque server, instead of a transparent data
>> structure. You want files to be completely invisible to applications
>> that don't know about it, unless the user requests it. Unfortunately,
>> it doesn't work that way. Applications ask for a directory listing, and
>> will open the requested file if and only if the filename in question
>> appears in that listing. If you want to use this opaque server model,
>> you'd be better served putting it in some parallel file system (say,
>> /backup) that won't interfere with naive applications accessing the
>> mundane data. Personally, I like your idea of putting the older
>> versions in the same directory hierarchy, but I think you'd have to use
>> .foo hidden directories to do it right.
>
> The whole idea of the file system is that it wouldn't return the file in
> the file listing. The user would have to know that the file system was
> versioning to access the older versions as they would explicitly have to
> request them.
>
> Jack
>
Okay, so now you have to modify ls, cp, tar, and thousands of other applications
to be aware of the versioning, otherwise you can't use it.
Please don't get hung up on the interface. This is a really cool feature that
will require some serious engineering work to make it work right. There's no
need to reinvent hidden files as well, since we already have a decades-old
standard for that.
-- Chris
Chris Snook wrote:
> Okay, so now you have to modify ls, cp, tar, and thousands of other
> applications to be aware of the versioning, otherwise you can't use it.
>
> Please don't get hung up on the interface. This is a really cool
> feature that will require some serious engineering work to make it work
> right. There's no need to reinvent hidden files as well, since we
> already have a decades-old standard for that.
I realise that, but designs like this one have quickly become a pain in
the past. I think it would be easier to implement something like what
was suggested in this email:
http://www.ussg.iu.edu/hypermail/linux/kernel/0706.2/1156.html
I'm thinking of writing a simple proof of concept version and then
seeing what people think from there.
Jack
Jack Stone wrote:
> Chris Snook wrote:
>> Jack Stone wrote:
>>> H. Peter Anvin wrote:
>>>> Chris Snook wrote:
>>>>> I pointed out NetApp's .snapshot directories because that's a method
>>>>> that uses legal path character, but doesn't break anything. With this
>>>>> method, userspace tools will have to be taught that : is suddenly a
>>>>> special character.
>>>> Not to mention that the character historically used for this purpose is
>>>> ; (semicolon.)
>>> But that would cause havoc with shells, which use ; to separate commands.
>>> Using ; would definitely break userspace.
>>>
>>> Jack
>>>
>> I can escape the semicolon just fine in bash. In fact, tab-completion
>> will do this automatically. That's really a non-issue. It just means
>> that anyone who wants to use this feature would have to know what
>> they're doing, which I believe is your goal, right?
>
> I didn't realise this. Would ; break userspace if it was used as the
> delimiter?
I have no idea. I've never written a file management utility or library, so I
don't know if they handle those specially.
> This discussion may be academic as this design is looking less and less
> useful/workable.
Well, I'd argue that the most interesting part of this idea is how it works on
the inside. You can implement arbitrarily impractical interfaces to test it out
as long as your code is modular enough to implement a community-agreeable
interface once it's ready for a wider audience.
-- Chris
>>>>> "Jack" == Jack Stone <[email protected]> writes:
Jack> The whole idea of the file system is that it wouldn't return the
Jack> file in the file listing. The user would have to know that the
Jack> file system was versioning to access the older versions as they
Jack> would explicitly have to request them.
So tell me what happens when I do:
touch foo:121212121212
John Stoffel wrote:
>>>>>> "Jack" == Jack Stone <[email protected]> writes:
>
> Jack> The whole idea of the file system is that it wouldn't return the
> Jack> file in the file listing. The user would have to know that the
> Jack> file system was versioning to access the older versions as they
> Jack> would explicitly have to request them.
>
> So tell me what happens when I do:
>
> touch foo:121212121212
>
>
<joke>
This fs crashes and burns because the interface still isn't correct.
</joke>
Honestly, I don't know, and this will be a problem for any interface
that follows this format.
Jack
On Tue, Jun 19, 2007 at 04:34:42PM -0400, John Stoffel wrote:
> >>>>> "Jack" == Jack Stone <[email protected]> writes:
>
> Jack> The whole idea of the file system is that it wouldn't return the
> Jack> file in the file listing. The user would have to know that the
> Jack> file system was versioning to access the older versions as they
> Jack> would explicitly have to request them.
>
> So tell me what happens when I do:
>
> touch foo:121212121212
You create a new file called 'foo:121212121212'. If you then modify it,
you could access the old version as foo:121212121212:0.
(The .snapshot directory makes more sense than magic names, since you
can see what versions of a file are available without a special tool).
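The collision John is probing can be made concrete with a rightmost-colon parse (the `split_revision` helper is hypothetical, just a sketch of the proposed rule): a file whose literal name ends in ":digits" is indistinguishable from a revision request, which is exactly why the old version needs a second suffix.

```python
# Hypothetical parser for the proposed suffix, to show the ambiguity:
# a literal file name ending in ":digits" parses as a revision request.

def split_revision(path):
    base, sep, tail = path.rpartition(":")
    if sep and tail.isdigit():
        return base, int(tail)
    return path, None

# The file created by `touch foo:121212121212` parses as a revision of "foo":
print(split_revision("foo:121212121212"))    # ('foo', 121212121212)
# ...so its own old version needs yet another suffix:
print(split_revision("foo:121212121212:0"))  # ('foo:121212121212', 0)
```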
On Tue, Jun 19, 2007 at 02:03:07PM -0400, Chris Snook wrote:
> I pointed out NetApp's .snapshot directories because that's a method that
> uses legal path character, but doesn't break anything. With this method,
> userspace tools will have to be taught that : is suddenly a special
> character. Userspace already knows that files beginning with . are special
> and treat them specially. We don't need a new special character for every
> new feature. We've got one, and it's flexible enough to do what you want,
> as proven by NetApp's extremely successful implementation. Perhaps you
> want a slightly different interface from what NetApp has implemented, but
> what you're suggesting will change the default behavior of basic tools like
> tar and ls. This is not a good thing.
I think I used one of those systems once (or at least another one with
.snapshot feature). It managed to completely avoid user space problems
by never actually showing .snapshot in directory listings, but you could
always cd to it or refer to it explicitly. You never risked having tar
or find or anything else accidentally pick it up. Very nice interface.
--
Len Sorensen
Matthew> On Tue, Jun 19, 2007 at 04:34:42PM -0400, John Stoffel wrote:
>> >>>>> "Jack" == Jack Stone <[email protected]> writes:
>>
Jack> The whole idea of the file system is that it wouldn't return the
Jack> file in the file listing. The user would have to know that the
Jack> file system was versioning to access the older versions as they
Jack> would explicitly have to request them.
>>
>> So tell me what happens when I do:
>>
>> touch foo:121212121212
Matthew> You create a new file called 'foo:121212121212'. If you then
Matthew> modify it, you could access the old version as
Matthew> foo:121212121212:0.
Sure, I knew that, I was trying to push the boundaries a bit here with
magic version filenaming conventions to show that it won't scale.
Heck, trying to figure out what:
touch foo::::::::::::::::::::::
does would be interesting, as would seeing whether the filesystem
parsing code handles that case.
Matthew> (The .snapshot directory makes more sense than magic names,
Matthew> since you can see what versions of a file are available
Matthew> without a special tool).
I agree 100%, it's a much better solution, though it has its own
problems, especially over NFS, where getting back to your original
parent directory properly can be painful.
John
On Tue, 19 Jun 2007 12:08:52 -0700
"H. Peter Anvin" <[email protected]> wrote:
> Chris Snook wrote:
> > I pointed out NetApp's .snapshot directories because that's a method
> > that uses legal path character, but doesn't break anything. With this
> > method, userspace tools will have to be taught that : is suddenly a
> > special character.
>
> Not to mention that the character historically used for this purpose is
> ; (semicolon.)
Yes but tdskb:foo.mac[1013,1013,frob];4 is *not* elegant. POSIX is very
clear about what is acceptable as magic in a pathname, and the unix spec
even more so. The NetApp approach recognizes two important things
1. Old version access is the oddity not the norm
2. Standards behaviour is important
Alan
On Tue, 19 Jun 2007, Lennart Sorensen wrote:
> On Tue, Jun 19, 2007 at 02:03:07PM -0400, Chris Snook wrote:
>> I pointed out NetApp's .snapshot directories because that's a method that
>> uses legal path character, but doesn't break anything. With this method,
>> userspace tools will have to be taught that : is suddenly a special
>> character. Userspace already knows that files beginning with . are special
>> and treat them specially. We don't need a new special character for every
>> new feature. We've got one, and it's flexible enough to do what you want,
>> as proven by NetApp's extremely successful implementation. Perhaps you
>> want a slightly different interface from what NetApp has implemented, but
>> what you're suggesting will change the default behavior of basic tools like
>> tar and ls. This is not a good thing.
>
> I think I used one of those systems once (or at least another one with
> .snapshot feature). It managed to completely avoid user space problems
> by never actually showing .snapshot in directory listings, but you could
> always cd to it or refer to it explicitly. You never risked having tar
> or find or anything else accidentally pick it up. Very nice interface.
Since anything starting with . is considered a 'hidden' file per *nix
tradition, it's ignored by many programs and optionally ignored by most
others (and anything that doesn't ignore . files when presenting files
to the user has many other problems on modern desktop systems anyway ;-)
The only trouble I ever had with the .snapshot approach is when tar or
find would descend into .snapshot when I didn't really intend for them
to do so.
David Lang
Alan Cox wrote:
>
> Yes but tdskb:foo.mac[1013,1013,frob];4 is *not* elegant.
I think describing VMS pathname syntax as "not elegant" is kind of like
describing George W. Bush as "not a genius."
> POSIX is very
> clear about what is acceptable as magic in a pathname, and the unix spec
> even more so. The NetApp approach recognizes two important things
>
> 1. Old version access is the oddity not the norm
> 2. Standards behaviour is important
>
3. An atomic snapshot is more useful than a bunch of disconnected
per-file versions. Kind of like CVS vs SVN.
-hpa
[email protected] wrote:
>
> the only trouble I ever had with the .snapshot approach is when tar or
> find would descend down into the .snapshot when I didn't really intend
> for it to do so.
>
Netapp optionally made .snapshot not show up in readdir, which solved
that problem.
I have a bigger issue with it starting with only one dot, which is
traditionally used for user configuration information. I think
..snapshot would have been a better choice, and by extension leaving
double-dot filenames as filesystem namespace (I know that contradicts
POSIX, but the only namespace available in POSIX is /../ which is a
global namespace.)
-hpa
On Tue, Jun 19, 2007 at 03:07:40PM -0700, [email protected] wrote:
> since anything starting with . is considered a 'hidden' file per *nix
> tradition it's ignored by many programs and optionally ignored by most
> others (and anything that doesn't ignore . files when presending files to
> the user has many other problems on modern desktop systems anyway ;-)
>
> the only trouble I ever had with the .snapshot approach is when tar or
> find would descend down into the .snapshot when I didn't really intend for
> it to do so.
That is why a readdir shouldn't even include .snapshot. Only explicitly
asking to open .snapshot should give it to you. That way tar can't
accidentally include it unless you specifically asked it to go into
.snapshot and archive the contents.
So 'hidden' here means completely hidden, not just left out by the shell
and the usual Unix conventions.
Of course backup tools have to know about this, but then again backup
tools have to know about all sorts of low level filesystem stuff that
normal tools don't.
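The "completely hidden" behaviour described above has a simple user-space analogue (names here are invented for illustration): filter the directory listing but leave explicit path lookups untouched, which is precisely why tar and find can't stumble into the snapshots by accident.

```python
import os

# Illustrative only: a listing that never reports .snapshot, while an
# explicit path naming .snapshot still resolves, because open()/stat()
# never consult the directory listing.

HIDDEN = {".snapshot"}

def filtered_listdir(path):
    """os.listdir() minus snapshot entries, so recursive tools like
    tar/find never see or descend into them."""
    return sorted(n for n in os.listdir(path) if n not in HIDDEN)
```

An explicit `open(dir + "/.snapshot/...")` still works, since only readdir is filtered.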
--
Len Sorensen
On Tue, Jun 19, 2007 at 03:13:33PM -0700, H. Peter Anvin wrote:
> [email protected] wrote:
> >
> > the only trouble I ever had with the .snapshot approach is when tar or
> > find would decend down into the .snapshot when I didn't really intend
> > for it to do so.
> >
>
> Netapp optionally made .snapshot not show up in readdir, which solved
> that problem.
>
> I have a bigger issue with it starting with only one dot, which is
> traditionally used for user configuration information. I think
> ..snapshot would have been a better choice, and by extension leaving
> double-dot filenames as filesystem namespace (I know that contradicts
> POSIX, but the only namespace available in POSIX is /../ which is a
> global namespace.)
Still, anything that depends on increasing the length of file or path
names to refer to different versions will encounter problems for long
file names and deep paths because there is an upper limit on file and
path name lengths.
It may be possible to use namespaces, where an application can change
the view it (and its children) will have of the storage by switching to
a different namespace tagged with, for instance, a specific version or
date we're interested in. I'm not sure whether we actually pass
namespace information down to the filesystems yet, though; last time I
checked it was only visible to the VFS.
Jan
Jan Harkes wrote:
>
> Still, anything that depends on increasing the length of file or path
> names to refer to different versions will encounter problems for long
> file names and deep paths because there is an upper limit on file and
> path name lengths.
>
Then you have the Solaris variant -- rely on openat() flags to descend
into another namespace.
-hpa
>We don't need a new special character for every
>> new feature. We've got one, and it's flexible enough to do what you
want,
>> as proven by NetApp's extremely successful implementation.
I don't know NetApp's implementation, but I assume it is more than just a
choice of special character. If you merely start the directory name with
a dot, you don't fool anyone but 'ls' and shell wildcard expansion. (And
for some enlightened people like me, you don't even fool ls, because we
use the --almost-all option to show the dot files by default, having been
burned too many times by invisible files).
I assume NetApp flags the directory specially so that a POSIX directory
read doesn't get it. I've seen that done elsewhere.
The same thing, by the way, is possible with Jack's filename:version idea,
and I assumed that's what he had in mind. Not that that makes it all OK.
--
Bryan Henderson IBM Almaden Research Center
San Jose CA Filesystems
On Tue, 2007-06-19 at 16:35 -0700, Bryan Henderson wrote:
> >We don't need a new special character for every
> >> new feature. We've got one, and it's flexible enough to do what you
> want,
> >> as proven by NetApp's extremely successful implementation.
>
> I don't know NetApp's implementation, but I assume it is more than just a
> choice of special character. If you merely start the directory name with
> a dot, you don't fool anyone but 'ls' and shell wildcard expansion. (And
> for some enlightened people like me, you don't even fool ls, because we
> use the --almost-all option to show the dot files by default, having been
> burned too many times by invisible files).
>
> I assume NetApp flags the directory specially so that a POSIX directory
> read doesn't get it. I've seen that done elsewhere.
No. The directory is quite visible with a standard 'ls -a'. Instead,
they simply mark it as a separate volume/filesystem: i.e. the fsid
differs when you call stat(). The whole thing ends up acting rather like
our bind mounts.
It means that you avoid all those nasty user issues where people try to
hard link to/from .snapshot directories, rename files across snapshot
boundaries, etc.
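The fsid trick described above has a close user-space analogue: stat() reports a device id, and tools already use it to refuse crossing mount boundaries (cf. `find -xdev`). A minimal sketch, with the function name invented for the example:

```python
import os

# Sketch of detecting a volume boundary the way tools detect mount
# points: st_dev is the closest stat() analogue of the NFS fsid.
# A .snapshot directory presented as a separate volume would fail this
# same-filesystem test, so hard links or renames across it get refused.

def same_filesystem(a, b):
    return os.stat(a).st_dev == os.stat(b).st_dev
```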
Trond
On Jun 19, 2007, at 03:58:57, Bron Gondwana wrote:
> On Mon, Jun 18, 2007 at 11:10:42PM -0400, Kyle Moffett wrote:
>> On Jun 18, 2007, at 13:56:05, Bryan Henderson wrote:
>>>> The question remains is where to implement versioning: directly
>>>> in individual filesystems or in the vfs code so all filesystems
>>>> can use it?
>>>
>>> Or not in the kernel at all. I've been doing versioning of the
>>> types I described for years with user space code and I don't
>>> remember feeling that I compromised in order not to involve the
>>> kernel.
>>
>> What I think would be particularly interesting in this domain is
>> something similar in concept to GIT, except in a file-system:
>
> [...snip...]
>
> It can work, but there's one big pain at the file level: no mmap.
IMHO it's actually not that bad. The "gitfs" would divide larger
files up into manageable chunks (say 4MB) which could be quickly
SHA-1ed. When a file is mmapped and partially modified, the SHA-1
would be marked as locally invalid, but since mmap() loses most
consistency guarantees that's OK. A time or writeout based "commit"
scheme might still freeze, SHA-1, and write-out the page at regular
intervals without the program's knowledge, but since you only have to
SHA-1 the relatively-small 4MB chunk (which is about to hit disk
anyways), it's not a significant time penalty. Even if under memory
pressure and swapping data out to disk you don't have to update the
SHA-1 and create a new commit as long as you keep a reference to the
object stored in the volume header somewhere and maintain the "SHA-1
out-of-date" bit.
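The chunking idea above can be sketched briefly (the 4 MB figure comes from the proposal; the function itself is invented for illustration): only the digest of the touched chunk changes, so a partial mmap write never forces a whole-file rehash.

```python
import hashlib

CHUNK = 4 * 1024 * 1024  # the "manageable chunk" size suggested above

def chunk_digests(data, chunk=CHUNK):
    """SHA-1 each fixed-size chunk; modifying bytes inside one chunk
    invalidates only that chunk's digest."""
    return [hashlib.sha1(data[i:i + chunk]).hexdigest()
            for i in range(0, len(data), chunk)]
```

Marking a chunk "locally invalid" after an mmap write then amounts to recomputing a single entry of this list at writeout time.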
A program which carefully uses msync() would be fine, of course (with
proper configuration) as that would create a new commit as appropriate.
Since mmap() is poorly defined on network filesystems in the absence
of msync(), I don't see that such behaviour would be a problem. And
it certainly would be fine on local filesystems as there you can just
stuff the "SHA-1 out-of-date" bit and a reference to the parent
commit and path in the object itself. Then you just need to keep a
useful reference to that object in a table somewhere in the volume
and you're set.
> If you don't want to support mmap it can work reasonably happily,
> though you may want to keep your sha1 (or other digest) state as
> well as the final digest so you can cheaply calculate the digest
> for a small append without walking the entire file. You may also
> want to keep state checkpoints every so often along a big file so
> that truncates don't cost too much to recalculate.
That may be worth it even if the file is divided into 4MB chunks (or
other configurable value), but it would need benchmarking.
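Keeping the digest *state* rather than only the final value, as the quoted text suggests, might look like this (class name invented; hashlib's copyable hash objects stand in for saved SHA-1 state):

```python
import hashlib

class AppendDigest:
    """Running SHA-1 with saved state checkpoints: an append hashes
    only the new bytes, and a truncate back to a checkpoint restores
    that state with no rehash of the earlier data."""

    def __init__(self):
        self._h = hashlib.sha1()
        self._marks = []          # saved digest states along the file

    def append(self, data):
        self._h.update(data)      # cost proportional to the append only

    def checkpoint(self):
        self._marks.append(self._h.copy())

    def truncate_to_last_mark(self):
        self._h = self._marks[-1].copy()

    def hexdigest(self):
        return self._h.hexdigest()
```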
> Luckily in a userspace VFS that's only accessed via FTP and DAV we
> can support a limited set of operations (basically create, append,
> read, delete) You don't get that luxury for a general purpose
> filesystem, and that's the problem. There will always be
> particular usage patterns (especially something that mmaps or seeks
> and touches all over the place like a loopback mounted filesystem
> or a database file) that just don't work for file-level sha1s.
I'd think that loopback-mounted filesystems wouldn't be that difficult:
1) Set the SHA-1 block size appropriately to divide the big file
into a bunch of little manageable files. Could conceivably be multi-
layered like directories, depending on the size of the file.
2) Mark the file as exempt from normal commits (i.e. without
special syscalls or fsync/msync() on the file itself, it is never
updated in the tree objects).
3) Set up the loopback device to call the gitfs commit code when
it receives barriers or flushes from the parent filesystem.
And database files aren't a big issue. I have yet to see a networked
filesystem on which you could stick a MySQL database from one node
and expect to get useful/recent read results from other nodes. If
you really wanted something like that for such a "gitfs", you could
just add code to MySQL to create a gitfs commit every N transactions
and not otherwise. The best part is: that would make online MySQL
backups from another node trivial! Just pick any arbitrary
appropriate commit object and mount that object, then "cp -a
mysql_db_dir mysql_backup_dir". That's not to say it wouldn't have a
performance penalty, but for some people the performance penalty
might be worth it.
Oh, and for those programs which want multi-master replication, this
makes it ten times easier:
1) Put each master-server on a different gitfs branch
2) Write your program as gitfs aware. Make it create gitfs
commits at appropriate times (so the data is accessible from other
nodes).
3) Come up with a useful non-interactive database-file merge
algorithm. Useful examples of different kinds of merge engines may
be found in the git project. This should take $BASE_VERSION,
$NEWVERSION1, $NEWVERSION2, and produce a $MERGEDVERSION. A good
algorithm should probably pick a safe default and save a "conflict"
entry in the face of conflicting changes.
4) Hook your merge algorithm into the gitfs mechanics using some
to-be-defined API.
5) Whenever your software does a database-file commit it sends
out a little notification to the other nodes (maybe using a gitfs API?)
6) Run a periodic (as defined by the admin yet again) thread on
each node which does branch merging. When two or more branches have
different SHA-1 sums the servers will rotate the merging task between
them. The thus-selected server will merge changes from the other
server(s) into its current working copy. With 2 servers this means
that the maximum delay between one server making a change and the
other server seeing it will be 2 times the merge interval.
7) For small pools of servers a simple rotated-merge-master
algorithm would work. For larger pools you would need to come up
with some logarithmic rotating-merge-node algorithm to evenly divide
the work of propagating changes across all nodes.
> It does have some lovely properties though. I'd enjoy working in
> an environment that didn't look much like POSIX but had the strong
> guarantees and auditability that addressing by sha1 buys you.
I'd like to think we can have our cake and eat it too :-D. POSIX
requirements should be doable on the local system and can be mimicked
well enough on networked filesystems (albeit with update latency)
that most programs won't care. If you're the only person modifying
files on gitfs, regardless of what node they are stored on, it should
have the same behavior as local files (since with gitfs caching they
would *become* local files too :-D). The few programs that do care
about POSIX atomicity across networked filesystems (which is already
mostly implementation defined) could probably be updated to map gitfs
commits and merges into their own internal transactions and do just
fine.
Cheers,
Kyle Moffett
Trond Myklebust wrote:
>>
>> I assume NetApp flags the directory specially so that a POSIX directory
>> read doesn't get it. I've seen that done elsewhere.
>
> No. The directory is quite visible with a standard 'ls -a'. Instead,
> they simply mark it as a separate volume/filesystem: i.e. the fsid
> differs when you call stat(). The whole thing ends up acting rather like
> our bind mounts.
> It means that you avoid all those nasty user issues where people try to
> hard link to/from .snapshot directories, rename files across snapshot
> boundaries, etc.
>
Last I used a Netapp, it was configurable, I believe; I seem to also
vaguely remember that one could configure it so that it only was
accessible as part of a mount string rather than as part of an
already-mounted filesystem. Of course, this was a long time ago.
-hpa
On Wednesday, 20 June 2007, H. Peter Anvin wrote:
> Alan Cox wrote:
> > POSIX is very
> > clear about what is acceptable as magic in a pathname, and the unix spec
> > even more so. The NetApp approach recognizes two important things
> >
> > 1. Old version access is the oddity not the norm
> > 2. Standards behaviour is important
>
> 3. An atomic snapshot is more useful than a bunch of disconnected
> per-file versions. Kind of like CVS vs SVN.
I believe that since a decision must be made about *when* a snapshot is
taken, and that decision should (indeed, has to) be made by userspace, a
userspace versioning system for doing the backups is the right solution. [1]
Whether there is some filesystem (FUSE or native) that allows online browsing
of the backups or not is another matter.
Regarding [1]: What userspace needs is
- atomic snapshots of complete directory trees, independent of mount
boundaries (across filesystems)
- an atomic way to change the state of the filesystem for the *whole* system.
For FSVS I'll try to use unionfs for that: populate some new directory with
my tree of changes, then overmount that over "/", and move the files over
one by one until the new directory is empty. (This must be re-checked on
reboot, of course.)
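The "move the files over one by one" step can be sketched as follows. The directory names are illustrative (this is not FSVS's actual code), and it assumes staged and live trees are on the same filesystem, so each mv is an atomic rename(2):

```shell
# Sketch of draining a staged tree into the live tree file-by-file.
# Paths are hypothetical; same-filesystem mv gives per-file atomicity.
drain_staged() {
    staged=$1; live=$2
    (cd "$staged" && find . -type f) | while read -r f; do
        mkdir -p "$live/$(dirname "$f")"   # ensure target directory exists
        mv -f "$staged/$f" "$live/$f"      # atomic per-file replacement
    done
}
```

Each individual file switch is atomic, but the whole-tree switch is not, which is exactly why the overmount (and the reboot re-check) is needed in the real workflow.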
These are actually two similar operations (from the atomic view), but have to
be done in completely different ways ... Maybe there could be some "better"
interface (if there is one - I don't know what could really be removed from
the above workflow).
Regards,
Phil
On Tue, 2007-06-19 at 20:12 +0100, Jack Stone wrote:
> H. Peter Anvin wrote:
> > Chris Snook wrote:
> >> I pointed out NetApp's .snapshot directories because that's a method
> >> that uses legal path character, but doesn't break anything. With this
> >> method, userspace tools will have to be taught that : is suddenly a
> >> special character.
> >
> > Not to mention that the character historically used for this purpose is
> > ; (semicolon.)
>
> But that would cause havoc with shells which use ; to separate commands.
Then the user has to quote the ;. BTW, `find` has used a lone ';' for ages
to terminate its '-exec' clause.
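A quick illustration of that precedent (the temporary directory and file names are just for the demo): the ';' ending -exec must be escaped as \; (or quoted as ';') so the shell passes it to find instead of treating it as a command separator.

```shell
# Create a scratch directory with one matching and one non-matching file,
# then let find run a command per match, terminated by an escaped ';'.
tmp=$(mktemp -d)
touch "$tmp/a.txt" "$tmp/b.log"
find "$tmp" -name '*.txt' -exec basename {} \;   # prints "a.txt"
```

An unescaped ; would end the shell command at `-exec basename {}` and find would fail with a missing-argument error, which is precisely the quoting burden being discussed.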
> Using ; would definitely break userspace
Only for buggily written shell scripts/commands. Who cares?
Bernd
--
Firmix Software GmbH http://www.firmix.at/
mobil: +43 664 4416156 fax: +43 1 7890849-55
Embedded Linux Development and Services
>The directory is quite visible with a standard 'ls -a'. Instead,
>they simply mark it as a separate volume/filesystem: i.e. the fsid
>differs when you call stat(). The whole thing ends up acting rather like
>our bind mounts.
Hmm. So it breaks user space quite a bit. By break, I mean uses that
work with more conventional filesystems stop working if you switch to
NetApp. Most programs that operate on directory trees willingly cross
filesystems, right? Even ones that give you an option, such as GNU cp,
don't by default.
But if the implementation is, as described, wildly successful, that means
users are willing to tolerate this level of breakage, so it could be used
for versioning too.
But I think I'd rather see a truly hidden directory for this (visible only
when looked up explicitly).
--
Bryan Henderson IBM Almaden Research Center
San Jose CA Filesystems
Bryan Henderson wrote:
>> The directory is quite visible with a standard 'ls -a'. Instead,
>> they simply mark it as a separate volume/filesystem: i.e. the fsid
>> differs when you call stat(). The whole thing ends up acting rather like
>> our bind mounts.
>
> Hmm. So it breaks user space quite a bit. By break, I mean uses that
> work with more conventional filesystems stop working if you switch to
> NetApp. Most programs that operate on directory trees willingly cross
> filesystems, right? Even ones that give you an option, such as GNU cp,
> don't by default.
>
> But if the implementation is, as described, wildly successful, that means
> users are willing to tolerate this level of breakage, so it could be used
> for versioning too.
>
> But I think I'd rather see a truly hidden directory for this (visible only
> when looked up explicitly).
>
When I administered a bunch of netapps I remember turning the visible
.snapshots off.
-hpa
Bryan Henderson wrote:
>> The directory is quite visible with a standard 'ls -a'. Instead,
>> they simply mark it as a separate volume/filesystem: i.e. the fsid
>> differs when you call stat(). The whole thing ends up acting rather like
>> our bind mounts.
>
> Hmm. So it breaks user space quite a bit. By break, I mean uses that
> work with more conventional filesystems stop working if you switch to
> NetApp. Most programs that operate on directory trees willingly cross
> filesystems, right? Even ones that give you an option, such as GNU cp,
> don't by default.
>
> But if the implementation is, as described, wildly successful, that means
> users are willing to tolerate this level of breakage, so it could be used
> for versioning too.
>
> But I think I'd rather see a truly hidden directory for this (visible only
> when looked up explicitly).
Well, if we're going to have super-secret hidden directories, we might as well
implement them in a namespace framework. Somebody is going to want generic
filesystem namespaces eventually, so having one unified mechanism for doing this
kind of thing will make it much easier, especially for userspace apps which
would need to be modified to be aware of them.
Personally, I'm happy with .snapshot and the like.
-- Chris
Interesting that you mention the multitude of file systems, because
I was very surprised to see NILFS being promoted in the latest Linux
Magazine with no mention of the other, more important file systems
currently in the works, like UnionFS, ChunkFS, or the much-publicized ext4.
I have to say I was disappointed by the article. I still haven't seen
any real proof that NILFS is the best file system since sliced bread.
Nor have I seen any comments on nilfs from Andrew and others, and
yet this is supposedly the best new file system coming to Linux. Maybe
I missed something that happened in Ottawa.
/Sorin
On Mon, 18 Jun 2007 05:45:24 -0400, Andreas Dilger <[email protected]>
wrote:
> On Jun 16, 2007 16:53 +0200, Jörn Engel wrote:
>> On Fri, 15 June 2007 15:51:07 -0700, alan wrote:
>> > >Thus, in the end it turns out that this stuff is better handled by
>> > >explicit version-control systems (which require explicit operations
>> > >to manage revisions) and atomic snapshots (for backup.)
>> >
>> > ZFS is the cool new thing in that space. Too bad the license makes it
>> > hard to incorporate it into the kernel.
>>
>> It may be the coolest, but there are others as well. Btrfs looks good,
>> nilfs finally has a cleaner and may be worth a try, logfs will get
>> snapshots sooner or later. Heck, even my crusty old cowlinks can be
>> viewed as snapshots.
>>
>> If one has spare cycles to waste, working on one of those makes more
>> sense than implementing file versioning.
>
> Too bad everyone is spending time on 10 similar-but-slightly-different
> filesystems. This will likely end up with a bunch of filesystems that
> implement some easy subset of features, but will not get polished for
> users or have a full set of features implemented (e.g. ACL, quota, fsck,
> etc). While I don't think there is a single answer to every question,
> it does seem that the number of filesystem projects has climbed lately.
>
> Maybe there should be a BOF at OLS to merge these filesystem projects
> (btrfs, chunkfs, tilefs, logfs, etc) into a single project with multiple
> people working on getting it solid, scalable (parallel readers/writers on
> lots of CPUs), robust (checksums, failure localization), recoverable,
> etc.
> I thought Val's FS summits were designed to get developers to collaborate,
> but it seems everyone has gone back to their corners to work on their own
> filesystem?
>
> Working on getting hooks into DM/MD so that the filesystem and RAID layers
> can move beyond "ignorance is bliss" when talking to each other would be
> great. Not rebuilding empty parts of the fs, limit parity resync to parts
> of the fs that were in the previous transaction, use fs-supplied checksums
> to verify on-disk data is correct, use RAID geometry when doing allocations,
> etc.
>
> Cheers, Andreas
> --
> Andreas Dilger
> Principal Software Engineer
> Cluster File Systems, Inc.
>
--
Best Regards
Sorin Faibish
Senior Technologist
Senior Consulting Software Engineer Network Storage Group
EMC²
where information lives
Phone: 508-435-1000 x 48545
Cellphone: 617-510-0422
Email : [email protected]