Martin J. Bligh wrote:
>> So there are many edits that would need to be done in lots of
>> Kconfig files and Makefiles if one selectively pulls or omits certain
>> sub-directories.
>
> Indeed, I ran across the same thing a while back. Would be *really* nice to
> fix, if only so some poor sod over a modem can download a smaller tarball,
> or save some diskspace.
I have seven source trees on disk right now. Getting rid of all
the archs but i386 would not only save tons of space, it would also
make 'grep -r' go faster and stop spewing irrelevant hits for archs
that I couldn't care less about.
------
Chuck
>>> So there are many edits that would need to be done in lots of
>>> Kconfig files and Makefiles if one selectively pulls or omits certain
>>> sub-directories.
>>
>> Indeed, I ran across the same thing a while back. Would be *really* nice
>> to fix, if only so some poor sod over a modem can download a smaller
>> tarball, or save some diskspace.
>
> I have seven source trees on disk right now. Getting rid of all
> the archs but i386 would not only save tons of space, it would also
> make 'grep -r' go faster and stop spewing irrelevant hits for archs
> that I couldn't care less about.
Indeed. But whilst you're waiting, hardlink everything together, and
patch the differences (patch knows how to break hardlinks). Make a
script that cp -lR's the tree to another copy (normally takes < 1s), and
then remove the other arches. grep that.
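That recipe can be sketched as a small shell function (the tree names and layout here are examples, not anything from the thread):

```shell
#!/bin/sh
# Sketch of the hardlink-copy-and-prune approach: cp -lR the tree,
# then throw away every arch directory except the one you care about.
prune_tree() {
    src=$1 dst=$2 keep=$3
    cp -lR "$src" "$dst"      # hardlinks only: near-instant, no data copied
    for d in "$dst"/arch/*/; do
        [ "$(basename "$d")" = "$keep" ] || rm -rf "$d"
    done
}
```

grep -r over the pruned copy then only sees the arch you kept, and the copy itself costs essentially no extra disk space.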
cscope with prebuilt indices on a filtered subset of the files may well do
better than grep, depending on exactly what you're doing (does 99% of it
for me). Don't use the cscope in Debian Woody, it's broken.
M.
On Thu, May 01, 2003 at 07:06:37AM -0700, Martin J. Bligh wrote:
> Indeed. But whilst you're waiting, hardlink everything together, and
> patch the differences (patch knows how to break hardlinks). Make a
> script that cp -lR's the tree to another copy (normally takes < 1s), and
> then remove the other arches. grep that.
I agree with Martin here, I always use hardlinks, and when I have too many
kernel trees, I even recompact them by diff/rm/cp -l/patch to get as small
differences as possible. You can have tens of kernels in less than 400 MB,
and tools such as diff and grep are really fast because it's easy to keep
several kernels in the cache.
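The diff/rm/cp -l/patch cycle described above might look something like this in shell (the function name and tree names are invented for illustration):

```shell
#!/bin/sh
# Rebuild WORK as hardlinks to BASE plus only its real differences.
# patch is what breaks the links for the files that actually changed.
recompact() {
    base=$1 work=$2
    diff -urN "$base" "$work" > "$work.diff"     # capture the differences
    rm -rf "$work"                               # throw away the diverged copy
    cp -lR "$base" "$work"                       # recreate it as pure hardlinks
    (cd "$work" && patch -p1 -s) < "$work.diff"  # patch unlinks what it rewrites
}
```

After this, only files that genuinely differ from the base occupy their own disk blocks; everything else is shared.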
The only danger is modifying several files at once with stupid operations such
as "cat $file.help >> Documentation/Configure.help", which are sometimes
included in some scripts. It would be cool to be able to lock the source, but
I never found out how (perhaps I should try chattr +i?). And I don't know how
to force vi and emacs to unlink before saving, so I have to be careful before
certain operations. But all in all, it's extremely useful.
Cheers,
Willy
>> Indeed. But whilst you're waiting, hardlink everything together, and
>> patch the differences (patch knows how to break hardlinks). Make a
>> script that cp -lR's the tree to another copy (normally takes < 1s), and
>> then remove the other arches. grep that.
>
> I agree with Martin here, I always use hardlinks, and when I have too many
> kernel trees, I even recompact them by diff/rm/cp -l/patch to get as small
> differences as possible. You can have tens of kernels in less than 400 MB,
> and tools such as diff and grep are really fast because it's easy to keep
> several kernels in the cache.
>
> The only danger is to modify several files at once with stupid operations
> such as "cat $file.help >> Documentation/Configure.help" which are
> sometimes included in some scripts. It would be cool to be able to lock
> the source, but I never found out how (perhaps I should try chattr +i?). And
> I don't know how to force vi and emacs to unlink before saving, so I have
> to be careful before certain operations. But all in all, it's extremely
> useful.
find -type f | xargs chmod ugo-w
whenever you make a new copy seems to work pretty well to me.
Then you use "dupvi" to edit the files, which is just a little wrapper that
breaks the link, and edits the file.
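The real dupvi isn't shown in the thread; this is only a guess at what a minimal wrapper in that spirit could look like:

```shell
#!/bin/sh
# Hypothetical dupvi-style wrapper: break each file's hardlink by
# swapping in a real copy, then edit the now-private file.
dupvi() {
    for f in "$@"; do
        cp -p "$f" "$f.$$"   # real copy: allocates a fresh inode
        mv -f "$f.$$" "$f"   # swap it in; the other links stay untouched
        chmod u+w "$f"       # undo the protective chmod ugo-w
    done
    "${EDITOR:-vi}" "$@"
}
```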
For added paranoia, I suppose you could make your "main" views (eg the
unpatched ones) owned by another user. But I've never had a problem with
just chmod, and I have a lot of views ... 1689 all linked together ;-)
-r--r--r-- 1689 fletch fletch 18691 Nov 17 20:29 COPYING
Oh, and diff of views takes < 1s (diff understands hardlinks too, it seems).
Any SCM can kiss my ass ;-)
M.
On Thu, May 01, 2003 at 07:35:48AM -0700, Martin J. Bligh wrote:
> >> Indeed. But whilst you're waiting, hardlink everything together, and
> >> patch the differences (patch knows how to break hardlinks). Make a
> >> script that cp -lR's the tree to another copy (normally takes < 1s), and
> >> then remove the other arches. grep that.
> >
> > I agree with Martin here, I always use hardlinks, and when I have too many
> > kernel trees, I even recompact them by diff/rm/cp -l/patch to get as small
> > differences as possible. You can have tens of kernels in less than 400 MB,
> > and tools such as diff and grep are really fast because it's easy to keep
> > several kernels in the cache.
> >
> > The only danger is to modify several files at once with stupid operations
> > such as "cat $file.help >> Documentation/Configure.help" which are
> > sometimes included in some scripts. It would be cool to be able to lock
> > the source, but I never found out how (perhaps I should try chattr +i?). And
> > I don't know how to force vi and emacs to unlink before saving, so I have
> > to be careful before certain operations. But all in all, it's extremely
> > useful.
>
> find -type f | xargs chmod ugo-w
that's also what I do when I don't trust a script ;-)
> whenever you make a new copy seems to work pretty well to me.
> Then you use "dupvi" to edit the files, which is just a little wrapper that
> breaks the link, and edits the file.
I didn't know about dupvi. But I admit it's easy enough to break the link
by hand before starting vi, to make sure there's no problem.
> For added paranoia, I suppose you could make your "main" views (eg the
> unpatched ones) owned by another user.
that could help against emacs, because it tries every possibility to save a
file, even changing its permissions if there's no other way.
> But I've never had a problem with just chmod, and I have a lot of views ...
> 1689 all linked together ;-)
>
> -r--r--r-- 1689 fletch fletch 18691 Nov 17 20:29 COPYING
and I thought it was dirty when I began to reach 50 links.... :-)
Cheers,
Willy
On Thu, May 01, 2003 at 07:35:48AM -0700, Martin J. Bligh wrote:
> just chmod, and I have a lot of views ... 1689 all linked together ;-)
>
> -r--r--r-- 1689 fletch fletch 18691 Nov 17 20:29 COPYING
That's a bunch. Who's fletch?
And more importantly, how do you keep track of what is in each of those?
I can see having 20, 100, whatever, and keeping it straight in your head
but 1600?
> Oh, and diff of views takes < 1s (diff understands hardlinks too, it seems).
> Any SCM can kiss my ass ;-)
Kiss, kiss :)
Ted Ts'o made us support hard links for the revision control files for the
same reasons and it works pretty well. We haven't extended that to the
checked out files because I'm nervous about tools which don't break the
links.
On the other hand, we could hard link the checked out files if they
were checked out read-only which mimics what you are doing with the
chmod... That's a thought.
We'll still never be as fast as a pure hardlinked tree, that's balls to
the wall as fast as you can go as far as I can tell.
--
Larry McVoy lm at bitmover.com http://www.bitmover.com/lm
>> -r--r--r-- 1689 fletch fletch 18691 Nov 17 20:29 COPYING
>
> That's a bunch. Who's fletch?
>
> And more importantly, how do you keep track of what is in each of those?
> I can see having 20, 100, whatever, and keeping it straight in your head
> but 1600?
I have one view for every patch, and a bunch of scripts to manage them,
tear them down, build them up, and create patches from all of them in one
dir (they're a numbered sequence). A decent tree structure helps ;-)
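The management scripts themselves aren't posted, but one piece of them, regenerating the numbered patch sequence by diffing each view against its predecessor, might be sketched like this (the NNN-name view layout is invented for illustration):

```shell
#!/bin/sh
# Regenerate a patch series from a directory of numbered views,
# diffing each view against the one before it.
gen_series() {
    viewdir=$1 patchdir=$2 prev=$3
    mkdir -p "$patchdir"
    for view in "$viewdir"/[0-9][0-9][0-9]-*; do
        name=$(basename "$view")
        diff -urN "$prev" "$view" > "$patchdir/$name.patch"
        prev=$view           # each patch stacks on the previous view
    done
}
```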
It helps me to keep all the patches separated out - I want to be able to
carry forward 100 patches (at least) in sequence against the mainline tree,
and keep them all separate. Totally different problem from the one Linus
has, IMHO.
I guess I have a view for what you call a changeset ... AFAICS, if you just
take 100 stacked patches and merge them forward through 30 versions, you
just end up with a big mess from which you can't extract the real "changes"
you're making back out of the main view. But I've never really tried; it
might work out in BK or something, I suppose. If that worked (in ~1 minute
for 100 patches), I'd be very tempted to try it (I hate learning curves for
new tools; half the time they're just burnt time).
> We'll still never be as fast as a pure hardlinked tree, that's balls to
> the wall as fast as you can go as far as I can tell.
Ow ;-)
M.
On Thu, May 01, 2003 at 07:54:03AM -0400, Chuck Ebbert wrote:
> Martin J. Bligh wrote:
>
> >> So there are many edits that would need to be done in lots of
> >> Kconfig files and Makefiles if one selectively pulls or omits certain
> >> sub-directories.
> >
> > Indeed, I ran across the same thing a while back. Would be *really* nice to
> > fix, if only so some poor sod over a modem can download a smaller tarball,
> > or save some diskspace.
>
> I have seven source trees on disk right now. Getting rid of all
> the archs but i386 would not only save tons of space, it would also
> make 'grep -r' go faster and stop spewing irrelevant hits for archs
> that I couldn't care less about.
>
>
> ------
> Chuck
I agree with you. Making separate trees for different archs would make the
tarball much smaller. Usually people only use one architecture, and the code
for the others just lies there unused. I think this has been discussed many
times, but it really is worth doing.
--
Balram Adlakha wrote:
> On Thu, May 01, 2003 at 07:54:03AM -0400, Chuck Ebbert wrote:
>> I have seven source trees on disk right now. Getting rid of all
>>the archs but i386 would not only save tons of space, it would also
>>make 'grep -r' go faster and stop spewing irrelevant hits for archs
>>that I couldn't care less about.
>
> I agree with you. Making separate trees for different archs would make the tarball much smaller. Usually people only use one architecture, and the code for the others just lies there unused. I think this has been discussed many times, but it really is worth doing.
How about a script to just prune it once you download it? That would at least
fix your disk space & grep issues, and would not affect those of us who like
to see it all.
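Such a prune script could be very small; a sketch (the tree path and the arch to keep are parameters here, nothing in this block ships with the kernel):

```shell
#!/bin/sh
# Post-download prune: remove every arch directory except the one named.
prune_arches() {
    tree=$1 keep=$2
    find "$tree/arch" -mindepth 1 -maxdepth 1 -type d \
        ! -name "$keep" -exec rm -rf {} +
}
```

A real script would also have to deal with include/asm-* and the Kconfig/Makefile references this thread opened with; this only handles arch/.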
If you want to save download bandwidth, just use incremental diffs and/or something
like bk or one of the cvs exports.
Ben
--
Ben Greear <[email protected]> <Ben_Greear AT excite.com>
President of Candela Technologies Inc http://www.candelatech.com
ScryMUD: http://scry.wanfear.com http://scry.wanfear.com/~greear
> >> I have seven source trees on disk right now. Getting rid of all
> >>the archs but i386 would not only save tons of space, it would also
> >>make 'grep -r' go faster and stop spewing irrelevant hits for archs
> >>that I couldn't care less about.
>
> >
> > I agree with you. Making separate trees for different archs would make the tarball much smaller. Usually people only use one architecture, and the code for the others just lies there unused. I think this has been discussed many times, but it really is worth doing.
>
> How about a script to just prune it once you download it? That would at least
> fix your disk space & grep issues, and would not affect those of us who like
> to see it all.
Agreed - we don't want to obfuscate getting the whole kernel tree for
anybody who wants to - some of us have loads of bandwidth, and do
compile for multiple architectures :-).
> If you want to save download bandwidth, just use incremental diffs and/or something
> like bk or one of the cvs exports.
Even the current 2.5 source tree is downloadable during a (long) lunch
break over ISDN (or a very long lunch break over a 56K modem :-)).
John.