2001-03-02 19:04:16

by William Stearns

[permalink] [raw]
Subject: [OFFTOPIC] Hardlink utility - reclaim drive space

Good day, all,
Sorry for the offtopic post; I sincerely believe this will be
useful to developers with multiple copies of, say, the linux kernel tree
on their drives. I'll be brief. Please follow up via private mail -
thanks.
Freedups scans the directories you give it for identical files and
hardlinks them together to save drive space. Please see
ftp://ftp.stearns.org/pub/freedups . V0.2.1 is up there; it has received
some testing, but may yet contain bugs.
I was able to recover ~676M by running it against 8 different
2.4.x kernel trees with different patches that originally contained ~948M
of files. YMMV.
I do understand there are better ways to handle this problem (cp
-av --link, cvs? Bitkeeper, deleting unneeded trees, tarring up trees,
etc.). See the readme for a little discussion on this. This is just one
approach that may be useful in some situations.
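
The core of what such a tool does can be sketched in a few lines of Python (a minimal, hypothetical sketch, not the actual freedups script, which lives at the FTP URL above):

```python
import hashlib
import os
from collections import defaultdict

def hardlink_duplicates(root):
    """Group regular files by (size, SHA-1 digest) and hardlink
    each group of identical files to a single inode."""
    by_key = defaultdict(list)
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            # skip symlinks and anything that is not a regular file
            if os.path.islink(path) or not os.path.isfile(path):
                continue
            h = hashlib.sha1()
            with open(path, 'rb') as f:
                for chunk in iter(lambda: f.read(65536), b''):
                    h.update(chunk)
            by_key[(os.path.getsize(path), h.hexdigest())].append(path)

    saved = 0
    for paths in by_key.values():
        first = paths[0]
        for dup in paths[1:]:
            if os.path.samefile(first, dup):
                continue  # already hardlinked together
            os.unlink(dup)
            os.link(first, dup)  # replace the duplicate with a hardlink
            saved += os.path.getsize(first)
    return saved  # bytes reclaimed
```

Grouping by (size, digest) keeps the hashing honest across files of different lengths; a production tool would likely also compare ownership, permissions, and mtimes before merging, since a hardlink forces all of those to be shared.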
Cheers,
- Bill

---------------------------------------------------------------------------
"Software is largely a service industry operating under the
persistent but unfounded delusion that it is a manufacturing industry."
-- Eric Raymond
--------------------------------------------------------------------------
William Stearns ([email protected]). Mason, Buildkernel, named2hosts,
and ipfwadm2ipchains are at: http://www.pobox.com/~wstearns
LinuxMonth; articles for Linux Enthusiasts! http://www.linuxmonth.com
--------------------------------------------------------------------------


2001-03-05 19:17:59

by Padraig Brady

[permalink] [raw]
Subject: Re: [OFFTOPIC] Hardlink utility - reclaim drive space

Hmm.. useful until you actually want to modify a linked file,
but then you're modifying the file in all "merged" trees.
Wouldn't it be cool to have an extended attribute
for files called "Copy on Write", so that you could
hardlink all duplicate files together, but when a file is
modified a copy is transparently created?
Actually, should it be called "Copy On Modify"? Since if
you copied a file, there would be no need to make an actual
copy until the file was modified.
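
The hazard being described is easy to demonstrate with plain hardlinks (a standalone Python sketch):

```python
import os
import tempfile

d = tempfile.mkdtemp()
orig = os.path.join(d, 'tree1_file')
link = os.path.join(d, 'tree2_file')

with open(orig, 'w') as f:
    f.write('original contents\n')
os.link(orig, link)  # "merge" the two trees' copies onto one inode

# An in-place write through one name is visible through the other,
# because both names refer to the same inode:
with open(orig, 'w') as f:
    f.write('modified\n')

print(open(link).read())  # the change shows through both names
```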

The only problem I see with this is the case where you don't have
enough space to store a copy of the file. What would you do
then, just return an error on write?

Is there any way this could be extended across filesystems?
I suppose you could add it on top of existing distributed filesystems?

I could see many uses for this, like backup systems, but perhaps
a block-level system is more appropriate in this case?
(like the just-announced SnapFS).

Is there any filesystem that supports this at present?

Padraig.

William Stearns wrote:

> Good day, all,
> Sorry for the offtopic post; I sincerely believe this will be
> useful to developers with multiple copies of, say, the linux kernel tree
> on their drives. I'll be brief. Please followup to private mail -
> thanks.
> Freedups scans the directories you give it for identical files and
> hardlinks them together to save drive space. Please see
> ftp://ftp.stearns.org/pub/freedups . V0.2.1 is up there; it has received
> some testing, but may yet contain bugs.
> I was able to recover ~676M by running it against 8 different
> 2.4.x kernel trees with different patches that originally contained ~948M
> of files. YMMV.
> I do understand there are better ways to handle this problem (cp
> -av --link, cvs? Bitkeeper, deleting unneeded trees, tarring up trees,
> etc.). See the readme for a little discussion on this. This is just one
> approach that may be useful in some situations.
> Cheers,
> - Bill

2001-03-05 19:25:19

by Jeremy Jackson

[permalink] [raw]
Subject: Re: [OFFTOPIC] Hardlink utility - reclaim drive space

Padraig Brady wrote:

> Hmm.. useful until you actually want to modify a linked file,
> but then you're modifying the file in all "merged" trees.
> Wouldn't it be cool to have an extended attribute
> for files called "Copy on Write", so then you could
> hardlink all duplicate files together, but when a file is
> modified a copy is transparently created.
> [...]

SnapFS might handle this: versioning, copy-on-write file
clones... and even finer grained: only the modified blocks of a file are
duplicated, not the entire file, and it does this in real time.

In the case of the kernel, why not fetch the whole repository?
CVS stores versions as diffs internally, saving space.

2001-03-05 22:09:17

by David Schleef

[permalink] [raw]
Subject: Re: [OFFTOPIC] Hardlink utility - reclaim drive space

On Mon, Mar 05, 2001 at 07:17:18PM +0000, Padraig Brady wrote:
> Hmm.. useful until you actually want to modify a linked file,
> but then your modifying the file in all "merged" trees.

Use emacs, because you can configure it to do something
appropriate with linked files. But for those of us addicted
to vi, the attached wrapper script is pretty cool, too.

dave...


Attachments:
(No filename) (373.00 B)
cow-wrapper (707.00 B)
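
The attached cow-wrapper script is not reproduced in this archive, but the idea behind such a wrapper, breaking the hardlink with a copy-and-rename before launching the editor, can be sketched as follows (hypothetical code, not the actual attachment):

```python
import os
import shutil
import subprocess

def break_hardlink(path):
    """If path has multiple hard links, replace it with a private copy
    so that editing it no longer touches the other linked names."""
    if os.stat(path).st_nlink > 1:
        tmp = path + '.cow-tmp'
        shutil.copy2(path, tmp)  # copy data and permission bits
        os.rename(tmp, path)     # atomically swap in the private copy

def edit(path, editor='vi'):
    """Break any hardlink, then hand the file to the editor."""
    break_hardlink(path)
    subprocess.call([editor, path])
```

The rename replaces only the edited name; the other names keep the old inode and contents, which is exactly the copy-on-write-in-userspace behaviour the thread is discussing.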

2001-03-06 13:58:02

by Padraig Brady

[permalink] [raw]
Subject: Re: [OFFTOPIC] Hardlink utility - reclaim drive space

Jeremy Jackson wrote:

> Padraig Brady wrote:
>
>> [...]
>
> snapFS might handle this - versioning, copy-on-write disk file
> clones... even finer grained: only modified blocks of a file are
> duplicated, not the entire file, and it does this in real-time.

Yes, I mentioned SnapFS above, and a block-level system would
be a win for large files that are quite similar. However,
in my experience this is usually not the case, i.e. large files
are usually not similar, so a simple file-level system would be
more appropriate in my opinion.

Also, I don't think user-space programs should be relied on to manage
hardlinked files by (effectively) doing:

cp orig temp; mv temp orig

You could use file permissions (chmod -w orig) to remind you to do this,
but that's just a kludge, and it's also messy to have every user-space
program doing something different.

Also, the cp above breaks the link, which you wouldn't want to do
until the file is actually modified. So if you implemented the
"Copy On Modify" extended attribute, you could set cp to cp -l by default.

I'm talking about something more general here than working with a few
similar trees. This is a general way to never have duplicate files on a
filesystem. Doing this at the block level would be more fine-grained,
but at the cost of much more complexity and processing time, especially
if you want to analyse an existing filesystem.
If you do it at the file level, you can just scan for duplicate files,
merge them using hardlinks, and set the "Copy On Modify" bit. This
can of course be cleared as appropriate, where you want the original
hardlink behaviour.

> in the case of kernel, why not get the whole repository?
> CVS stores versions as diffs internally, saving space.

Yep, good for the kernel, where there are no binaries, but
not good in general.

Padraig.

2001-03-07 14:53:45

by David Woodhouse

[permalink] [raw]
Subject: Re: [OFFTOPIC] Hardlink utility - reclaim drive space


[email protected] said:
> Wouldn't it be cool to have an extended attribute for files called
> "Copy on Write", so then you could hardlink all duplicate files
> together, but when a file is modified a copy is transparently created.

> The only problem I see with this is that you wouldn't have enough
> space to store a copy of a file, what would you do in this case, just
> return an error on write?

Yep. write(2) is allowed to return -ENOSPC, even when you're not extending
the file you're writing to. Think about holes and log-structured
filesystems.
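
In userspace that means checking for ENOSPC on every write, not just writes that extend the file. A sketch of a write loop that retries partial writes and lets ENOSPC surface (illustrative, not from the thread):

```python
import os

def full_write(fd, data):
    """Write all of data to fd, retrying partial writes.
    An OSError with errno ENOSPC can propagate even when the write
    does not extend the file (e.g. filling a hole, or any write on
    a log-structured filesystem)."""
    view = memoryview(data)
    while len(view):
        n = os.write(fd, view)  # may raise OSError(errno.ENOSPC)
        view = view[n:]
```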

--
dwmw2