2001-12-18 05:21:16

by Eyal Sohya

[permalink] [raw]
Subject: The direction linux is taking

I've been watching this list and have some questions to ask,
which I would appreciate being answered. Some might not
have definite answers, and we might be divided on them.

1. Are we satisfied with the source code control system?
2. Is there enough planning for documentation? As another
poster mentioned, there are new APIs and we don't know about
them.
3. There is no central bug tracking database. At the least, people
should know the status of the bugs they have found in particular
releases.
4. The aggressive nature of this mailing list itself may be a
turn-off to many who would like to contribute.





2001-12-18 06:10:01

by Craig Christophel

[permalink] [raw]
Subject: Re: The direction linux is taking

The aggressive nature of the list is a result of people who have all spent a
great deal of time researching kernel internals. It is much akin to a thesis
proposal and defense that you would see in an educational environment. It
may not be the most comfortable thing in the world, but it gets the base
issues resolved, because if you do not know what is going on someone WILL
tell you -- and then you revise and defend. Think of it as an academic
discussion forum, where people (usually) have the right to sound off.
There is normally no harm done, although the recent MM discussions have been
a bit heated.



Craig.



On Tuesday 18 December 2001 00:20, Eyal Sohya wrote:
> I've watched this List and have some questions to ask
> which i would appreciate are answered. Some might not
> have definite answers and we might be divided on them.
>
> 1. Are we satisfied with the source code control system ?
> 2. Is there enough planning for documentation ? As another
> poster mentioned, there are new API and we dont know about
> them.
> 3. There is no central bug tracking database. At least people
> should know the status of the bugs they have found with some
> releases.
> 4. Aggressive nature of this mailing list itself may be a
> turn off to many who would like to contribute.
>
>
>

2001-12-18 12:20:31

by Rik van Riel

[permalink] [raw]
Subject: Re: The direction linux is taking

On Tue, 18 Dec 2001, Eyal Sohya wrote:

> 1. Are we satisfied with the source code control system ?

There is no source control.

> 2. Is there enough planning for documentation ? As another
> poster mentioned, there are new API and we dont know about
> them.

Documentation patches usually get dropped on the floor,
so writing documentation isn't really worth it most of
the time. Documentation really only works if it's written
together with the code and sent in the same patch.

> 3. There is no central bug tracking database. At least people
> should know the status of the bugs they have found with some
> releases.
> 4. Aggressive nature of this mailing list itself may be a
> turn off to many who would like to contribute.

Tough. If you can't deal with this you're better off using
a kernel package from your favourite Linux distribution.

cheers,

Rik
--
DMCA, SSSCA, W3C? Who cares? http://thefreeworld.net/

http://www.surriel.com/ http://distro.conectiva.com/

2001-12-18 14:38:37

by M. Edward Borasky

[permalink] [raw]
Subject: Re: The direction linux is taking

On Tue, 18 Dec 2001, Eyal Sohya wrote:

> I've watched this List and have some questions to ask
> which i would appreciate are answered. Some might not
> have definite answers and we might be divided on them.

My opinions only!!


> 1. Are we satisfied with the source code control system ?

With CVS, probably -- it's open source and rather universally known.
With the version control *process* ... well ... I personally favor a
full SEI CMM level 2 or even level 3 process. Whether there are open
source tools to facilitate that process is another story.

> 2. Is there enough planning for documentation ? As another poster
> mentioned, there are new API and we dont know about them.

There is, as it turns out, a tremendous *amount* of documentation,
although it is not as centralized as it could be. Again, I favor the SEI
CMM model.

> 3. There is no central bug tracking database. At least people should
> know the status of the bugs they have found with some releases.

Absolutely! Bug tracking and source / version control ought to be
integrated and centralized.

> 4. Aggressive nature of this mailing list itself may be a turn off to
> many who would like to contribute.

Well ... peer review / code walkthroughs are part of SEI CMM level 3
IIRC, and peer review is an important part of the scientific process. We
all have our opinions and our reasons for being here and levels of
contribution we are willing and able to make. When all is said and done,
more is said than done :)). A lot *is* getting done! The only things I
would change about this list are a reliable digest, a *vastly* better
search engine and a better mailing list manager than majordomo.

--
Ed Borasky [email protected] http://www.borasky-research.net

Give me your brains or I'll blow your money out.

2001-12-18 14:33:18

by Dana Lacoste

[permalink] [raw]
Subject: RE: The direction linux is taking

> 1. Are we satisfied with the source code control system ?

Yes. Alan (2.2) and Marcelo (2.4) and Linus (2.5) are doing
a good job with source control.

The fact that 'source control' is a person and not a piece
of software is irrelevant.

> 2. Is there enough planning for documentation ? As another
> poster mentioned, there are new API and we dont know about
> them.

Although this seems annoying, it's just one facet of the
primary difference between Linux and a commercially based
kernel : if you want to know how something works and how
it's being developed, then you MUST participate, in this
and other mailing lists.

I, for example, don't particularly care about the structures
that define the block interfaces in the kernel : I don't use
them. I do, however, care about networking things, so I follow
linux-kernel and linux-net (and several other lists) to make
sure I'm up to date. I am placing inherent trust in the
people developing the block code, trusting that they will do
a good job and provide a stable platform for my networking needs :)

> 3. There is no central bug tracking database. At least people
> should know the status of the bugs they have found with some
> releases.

There is no central product, so there can be no central bug track.
(see below)

> 4. Aggressive nature of this mailing list itself may be a
> turn off to many who would like to contribute.

You're missing something again.

I think this is a FAQ, so maybe we can develop a form response
here. Feel free to edit the following :

What is Linux? (The LKML definition)
Linux is a free, open source kernel that is used by many people
as the center for their operating system. The operating system
as a whole is NOT Linux. Linux is just the kernel.

"RedHat Linux" is an example of an entire operating system that
uses the Linux kernel and adds lots of other software around it
to make an entire operating system.

Similarly, Lineo makes an embedded product that starts with the
same kernel code. It doesn't target ANY of the same users that
RedHat Linux targets, but that doesn't make it any less significant.

Why is this distinction important? Because in LKML we are not
trying to define the way that the kernel is used, we are not
trying to take over the desktop world, we are not trying to
take over the supercomputing world, and we are not trying to
become the next microsoft. We are trying to make the best
kernel available, and that means that we support dozens of
different hardware platforms and thousands of different
operating environments.

LKML is a place where lots of developers who work on the Linux
kernel talk about different things (usually) pertaining to the
kernel source code and ways of improving it, by bug fixing, by
feature additions, or by code/API re-writing.

If you've worked in a pure development environment then you have
probably observed that people with ideas can become quite vocal
when defending their ideas, and because LKML is email based we
probably seem to be more, ah, vocal than most. If you can't
handle this kind of environment, then stick to Kernel Traffic.
(http://kt.zork.net/)

Does that answer your question?

Dana Lacoste
Linux Developer
Ottawa, Canada

2001-12-18 14:54:54

by Alan

[permalink] [raw]
Subject: Re: The direction linux is taking

> > 1. Are we satisfied with the source code control system ?
>
> Yes. Alan (2.2) and Marcelo (2.4) and Linus (2.5) are doing
> a good job with source control.

Not really. We do a passable job. Stuff gets dropped, lost,
deferred and forgotten, or applied when it conflicts with other work
-- and much of this is stuff that software wouldn't actually improve
on over a person.

> Although this seems annoying, it's just one facet of the
> primary difference between Linux and a commercially based
> kernel : if you want to know how something works and how
> it's being developed, then you MUST participate, in this
> and other mailing lists.

That won't help you -- most discussion occurs in private because l/k
is too noisy and many key people don't read it.

> > 3. There is no central bug tracking database. At least people
> > should know the status of the bugs they have found with some
> > releases.
>
> There is no central product, so there can be no central bug track.
> (see below)

Rubbish. Ask the engineering world about fault tracking. You won't get
"different products, no central flaw tracking"; you'll get extensive
cross-correlation, statistical tools and the like in any system where
reliability matters.

Many kernel bug reports end up invisible to some of the developers.

Alan

2001-12-18 15:14:05

by Dead2

[permalink] [raw]
Subject: Re: The direction linux is taking

> > > 1. Are we satisfied with the source code control system ?
> >
> > Yes. Alan (2.2) and Marcelo (2.4) and Linus (2.5) are doing
> > a good job with source control.
>
> Not really. We do a passable job. Stuff gets dropped, lost,
> deferred and forgotten, applied when it conflicts with other work
> - much of this stuff that software wouldnt actually improve on over a
> person

What about having the Linux source code in a CVS tree where active/trusted
driver/module maintainers are granted write access, and everyone else read
access?
After the patches are applied, people will test them out, and bugfixes will
be applied when bugs are detected.
Then, when the kernel maintainer feels this or that source code is ready for a
.pre kernel, he puts it in the main kernel tree.
(This would indeed pose a security risk, but who in their right mind would run
a CVS snapshot on anything important? That's right: _no one_ in their _right
mind_.)

Of course this would require much maintenance, and possibly more than
one kernel maintainer, because of how much easier it would become
for driver/module maintainers to apply patches they believe would make
things better. Cleanups would also be necessary from time to time
(cleanups = making the CVS tree identical to the main kernel tree again).

Just my 2 cents..

-=Dead2=-
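[To make the flow proposed above a bit more concrete, here is a rough sketch
of the gatekeeper step, assuming Python, a stock cvs client and the patch
tool; the repository path, module and tag names are invented, and error
handling and the actual -pre release machinery are left out.]

#!/usr/bin/env python
"""Sketch of the gatekeeper step in the CVS workflow proposed above."""
import subprocess

CVSROOT = ":pserver:[email protected]:/cvsroot/linux"  # hypothetical shared tree
MAINLINE = "/usr/src/linux"                                   # gatekeeper's own mainline tree

def promote(module, last_sync_tag):
    """Pull everything committed to `module` since the last sync out of the
    shared CVS tree as one patch, then test-apply it against mainline."""
    # cvs rdiff works directly against the repository; no checkout needed.
    rdiff = subprocess.run(
        ["cvs", "-d", CVSROOT, "rdiff", "-u",
         "-r", last_sync_tag, "-r", "HEAD", module],
        capture_output=True, text=True)
    # Dry run first: the code only goes into the next .pre if it applies cleanly.
    # (The -p strip level depends on the repository layout; -p0 is assumed here.)
    check = subprocess.run(["patch", "-p0", "--dry-run"],
                           input=rdiff.stdout, text=True, cwd=MAINLINE)
    return check.returncode == 0, rdiff.stdout

ok, patch_text = promote("drivers/net", "SYNC_2_4_17")
print("fold into the next .pre" if ok else "send back to the subsystem maintainer")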




2001-12-18 15:19:25

by Dana Lacoste

[permalink] [raw]
Subject: RE: The direction linux is taking

> > > 1. Are we satisfied with the source code control system ?

> > Yes. Alan (2.2) and Marcelo (2.4) and Linus (2.5) are doing
> > a good job with source control.

> Not really. We do a passable job. Stuff gets dropped, lost,
> deferred and forgotten, applied when it conflicts with other work
> - much of this stuff that software wouldnt actually improve on over a
> person

So we end up with the same result, then:
we are 'satisfied with the current source code control system'
because there isn't a way currently available that would offer
any significant benefit.

> > Although this seems annoying, it's just one facet of the
> > primary difference between Linux and a commercially based
> > kernel : if you want to know how something works and how
> > it's being developed, then you MUST participate, in this
> > and other mailing lists.

> That wont help you - most discussion occurs in private because l/k
> is too noisy and many key people dont read it.

...but if you are working with the code and you see something change,
the mailing list is the place to ask, correct?

What I'm saying isn't so much that the mailing lists are complete,
but that you have to keep current with the community if you want to
keep current with its work.

> > There is no central product, so there can be no central bug track.
> > (see below)

> Rubbish. Ask the engineering world about fault tracking. You won't get
> "different products no central flaw tracking" you'll get
> extensive cross
> correlation, statistical tools and the like in any syste,
> where reliability
> matters

> Many kernel bug reports end up invisible to some of the developers.

Many kernel developers don't read LKML.
Isn't that their flaw?

Many bug reports don't end up in the right place.
(i.e. a Sparc patch on the LKML but not the Sparc-Linux mailing lists)

How is a central bug repository going to help?

For example, Red Hat's knowledge base is a piece of crap: it's
impossible to find anything because of the millions of variations
on different products.

You can't maintain a central bug repository for separate product
streams (Red Hat's kernel vs. "stock" vs. SuSE vs. VA, etc.)
because there are too many variables. If you want a centralized
bug tracking system then you MUST use the same product, or it
will end up tracking end-user bugs instead of developer bugs,
because the developers won't use it.

I sincerely challenge you to propose a method for centralizing
bug tracking in the Linux kernel that _can_ be used by the
community as a whole. That means something that Linus would use
_and_ somebody who doesn't subscribe to LKML can use to find out
why he can't compile loop.o on his redhat 7.0 system with the
kernel he got from kernel.org a few weeks ago.

Dana Lacoste
Linux Developer
Ottawa, Canada

2001-12-18 15:19:25

by David Weinehall

[permalink] [raw]
Subject: Re: The direction linux is taking

On Tue, Dec 18, 2001 at 06:38:26AM -0800, M. Edward (Ed) Borasky wrote:
> On Tue, 18 Dec 2001, Eyal Sohya wrote:
>
> > I've watched this List and have some questions to ask
> > which i would appreciate are answered. Some might not
> > have definite answers and we might be divided on them.
>
> My opinions only!!
>
>
> > 1. Are we satisfied with the source code control system ?
>
> With CVS, probably -- it's open source and rather universally known.
> With the version control *process* ... well ... I personally favor a
> full SEI CMM level 2 or even level 3 process. Whether there are open
> source tools to facilitate that process is another story.
>
> > 2. Is there enough planning for documentation ? As another poster
> > mentioned, there are new API and we dont know about them.
>
> There is, as it turns out, a tremendous *amount* of documentation,
> although it is not as centralized as it could be. Again, I favor the SEI
> CMM model.
>
> > 3. There is no central bug tracking database. At least people should
> > know the status of the bugs they have found with some releases.
>
> Absolutely! Bug tracking and source / version control ought to be
> integrated and centralized.
>
> > 4. Aggressive nature of this mailing list itself may be a turn off to
> > many who would like to contribute.
>
> Well ... peer review / code walkthroughs are part of SEI CMM level 3
> IIRC, and peer review is an important part of the scientific process. We
> all have our opinions and our reasons for being here and levels of
> contribution we are willing and able to make. When all is said and done,
> more is said than done :)). A lot *is* getting done! The only things I
> would change about this list are a reliable digest, a *vastly* better
> search engine and a better mailing list manager than majordomo.

With SEI CMM level 3 for the kernel, complete testing and documentation,
we'd be able to release a new kernel every 5 months, with new drivers
2 years after release of the device, and support for new platforms
2-3 years after their availability, as opposed to 1-2 years before
(IA-64, for instance...)

We'd also kill off all the advantages that bazaar-style development
actually has, while gaining nothing in particular except for
a slow machinery of paperwork. No thanks.

I don't complain when people do proper documentation and testing of
their work; rather the opposite. But it needs to be done on a volunteer
basis, not forced by some standard. Do you really think Linus
would be able to take on all the extra work of software engineering? Think
again. Do you honestly believe he'd accept doing so in a million years?
Fat chance.

Grand software engineering based on PSP/CMM/whatever is fine when you
have a clear goal in mind: a plan stating what to do, detailing
everything meticulously. Not so for something that changes direction on
pure whim from one week to the next, with the only goals being
improvement, expansion and (sometimes) simplification. Yes, some people
have a grand plan for their subsystems (I'm fairly convinced that
Alexander Viro has some plans up his sleeve for the VFS, and I'm sure it
involves a lot of ideas from Plan 9. But this is pure speculation, of
course...) and there are some goals (such as the pending transition to a
bigger dev_t, CML2, kbuild 2.5 et al.), but most development takes place
as follows: idea -> post on lkml -> long discussion -> implementation ->
long discussion (about petty details) -> inclusion/rejection -> possible
rehash of this...


Regards: David Weinehall
_ _
// David Weinehall <[email protected]> /> Northern lights wander \\
// Maintainer of the v2.0 kernel // Dance across the winter sky //
\> http://www.acc.umu.se/~tao/ </ Full colour fire </

2001-12-18 15:29:25

by Momchil Velikov

[permalink] [raw]
Subject: Re: The direction linux is taking

>>>>> "David" == David Weinehall <[email protected]> writes:

David> We'd also kill off all the advantages that the bazaar-style development

Bazaar-style development? What bazaar-style development? Last I
heard, most discussions are held in private and many key people don't
read lkml.

Regards,
-velco

2001-12-18 18:08:41

by John Alvord

[permalink] [raw]
Subject: RE: The direction linux is taking

On Tue, 18 Dec 2001, Dana Lacoste wrote:

> > > > 1. Are we satisfied with the source code control system ?
>
> > > Yes. Alan (2.2) and Marcelo (2.4) and Linus (2.5) are doing
> > > a good job with source control.
>
> > Not really. We do a passable job. Stuff gets dropped, lost,
> > deferred and forgotten, applied when it conflicts with other work
> > - much of this stuff that software wouldnt actually improve on over a
> > person
>
> So the same result then :
> We are 'satisfied with the current source code control system'
> because there isn't a way currently available that would allow
> for any significant benefit.
>
> > > Although this seems annoying, it's just one facet of the
> > > primary difference between Linux and a commercially based
> > > kernel : if you want to know how something works and how
> > > it's being developed, then you MUST participate, in this
> > > and other mailing lists.
>
> > That wont help you - most discussion occurs in private because l/k
> > is too noisy and many key people dont read it.
>
> ...but if you are working with the code and you see something change
> the mailing list is the place to ask, correct?
>
> What I'm saying isn't so much that the mailing lists are complete,
> but that you have to keep current with the community if you want to
> keep current with its work.
>
> > > There is no central product, so there can be no central bug track.
> > > (see below)
>
> > Rubbish. Ask the engineering world about fault tracking. You won't get
> > "different products no central flaw tracking" you'll get
> > extensive cross
> > correlation, statistical tools and the like in any syste,
> > where reliability
> > matters
>
> > Many kernel bug reports end up invisible to some of the developers.
>
> Many kernel developers don't read LKML.
> Isn't that their flaw?
>
> Many bug reports don't end up in the right place.
> (i.e. a Sparc patch on the LKML but not the Sparc-Linux mailing lists)
>
> How is a central bug repository going to help?
>
> For example. Red Hat's knowledge base is a piece of crap. It's
> impossible to find anything because of the millions of variations
> on different products.
>
> You can't maintain a central bug repository for separate product
> streams (Red Hat's kernel vs. "Stock" vs. Suse vs. VA, etc)
> because there's too many variables. If you want a centralized
> bug tracking system then you MUST use the same product or it
> will end up tracking end-user bugs instead of developer bugs
> because the developers won't use it.
>
> I sincerely challenge you to propose a method for centralizing
> bug tracking in the Linux kernel that _can_ be used by the
> community as a whole. That means something that Linus would use
> _and_ somebody who doesn't subscribe to LKML can use to find out
> why he can't compile loop.o on his redhat 7.0 system with the
> kernel he got from kernel.org a few weeks ago.

A little while ago, Linus argued (apparently) seriously that chaotic
development gave a better result than more tightly controlled
methods... that a larger space of solutions was searched and better
optima were found.

Maybe the current development environment is planned chaos.

john alvord

2001-12-18 18:47:53

by Ryan Sweet

[permalink] [raw]
Subject: RE: The direction linux is taking

[...]
> > Many bug reports don't end up in the right place.
> > (i.e. a Sparc patch on the LKML but not the Sparc-Linux mailing lists)
[...]
> > For example. Red Hat's knowledge base is a piece of crap. It's
> > impossible to find anything because of the millions of variations
> > on different products.
[...]
> > I sincerely challenge you to propose a method for centralizing
> > bug tracking in the Linux kernel that _can_ be used by the
> > community as a whole. That means something that Linus would use
> > _and_ somebody who doesn't subscribe to LKML can use to find out
> > why he can't compile loop.o on his redhat 7.0 system with the
> > kernel he got from kernel.org a few weeks ago.

We have one already. It's called Google. The trouble is that (as with any
other system one might propose) one has to know how to search effectively.

If any of the 800 folks who posted that "loop.o" doesn't compile had bothered
to search Google first with some decent parameters, they would have seen
that the issue was identified and a solution posted almost immediately.
The same goes for the example of bugs going to the wrong list. If I had a
problem with sparc linux, would I search only on the sparc linux list?
No, I would spend a few minutes to carefully craft some Google queries and
weed through the results first.

Sure, there are many weaknesses: it takes a lot of intuition and sometimes
time to sort through massive results, and any discussion that doesn't go
into public lists doesn't get archived. I'm not convinced that any more
specific solution would solve these problems in a significant enough way
to justify changing things as they are currently.

I suppose another thing we lose is data mining for info about bugs that
have been reported/resolved/merged over time. I'm not sure if that sort
of info would be useful or not. I suspect that a clever enough perl monk
could even extract that sort of information from google without too much
effort.


--
Ryan Sweet <[email protected]>
Atos Origin Engineering Services
http://www.aoes.nl

2001-12-18 19:43:33

by Alan

[permalink] [raw]
Subject: Re: The direction linux is taking

> > is too noisy and many key people dont read it.
>
> ...but if you are working with the code and you see something change
> the mailing list is the place to ask, correct?

Or ask its submitter -- that's why names in changelogs are so important.

> > Many kernel bug reports end up invisible to some of the developers.
>
> Many kernel developers don't read LKML.
> Isn't that their flaw?

Not really. If they read lk they would have no time to fix bugs.

> I sincerely challenge you to propose a method for centralizing
> bug tracking in the Linux kernel that _can_ be used by the
> community as a whole. That means something that Linus would use
> _and_ somebody who doesn't subscribe to LKML can use to find out
> why he can't compile loop.o on his redhat 7.0 system with the
> kernel he got from kernel.org a few weeks ago.

You don't centralise it. You ensure the data is in common(ish) formats
and let the search tools improve. Would you build Google by making all
the web run at one site?
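
[One way to read that: agree on a minimal plain-text report format, let every
tracker publish it, and leave the cross-correlation to external search tools.
A small sketch, assuming Python; the header names below are invented for
illustration, not an existing standard.]

"""Each site publishes its bug reports as RFC-822-style records; anyone can
crawl and index them, so no central database is required."""
from email.parser import Parser

SAMPLE_REPORT = """\
Kernel-Version: 2.4.17-pre2
Subsystem: block/loop
Status: open
Summary: loop.o fails to build as a module

Full description, .config and oops trace would follow here...
"""

report = Parser().parsestr(SAMPLE_REPORT)
# A crawler only needs the agreed header names to cross-correlate reports
# from many independent trackers.
key = (report["Subsystem"], report["Status"])
print(key, "-", report["Summary"])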

2001-12-18 19:43:43

by Ken Brownfield

[permalink] [raw]
Subject: Re: The direction linux is taking

The CVS tree availability you mention parallels the FreeBSD tree, I
believe. However, assuming enough brain cycles, one knowledgeable
maintainer seems to be a better method of maintaining a kernel.

Having one person maintain the entire CVS tree for an OS would be
laughable, but in the case of a kernel you have much tighter control
over what goes into a smaller, more delicate tree, with maintainers who
have access to hardware for device driver implementation, etc.
Especially when this person is a Linus/Alan/Marcelo, etc.

I've been following lkml for some time now, and I've been using patches
that get posted to the list. But I do so at my own risk, since I do not
have comprehensive knowledge of kernel internals. But even I can tell
that many of the patches posted are either bogus, are potentially
incorrect in subtle and/or complex ways, or are simply working around
user-space issues or other bugs.

There have been discussions on this list (and several off it) about how
patches are dropped, ignored, etc. Part of this is due to what Linus
himself described as a conservative stabilization of 2.4, given the VM and
other issues. I believe the other part is that one person (Marcelo in
this case) will have a hard time taking something as verbose as lkml
(plus the plethora of direct email that he must receive) and weeding out
the noise from the signal -- and then possibly having to spend time
finding a maintainer or someone knowledgeable about a certain specific
area of the kernel to see if the signal is legitimate.

I think Alan compiled a ton of very useful stuff in the 2.4-ac series,
but (maybe he'll agree with my bs here :) I think 2.2-mainline and
2.4-mainline require different trade-offs.

What might take out a few birds with one stone is to have someone on
lkml become an "LKML MAINTAINER": collect patches and bug reports in a
central place. This would include:

1) The patch and/or bug report
2) The entire LKML thread, with "important" messages marked
3) Personal input, prioritization, severity info, etc.

This could be a bugtraq type site, or webrt, or something along those
lines. Something like kernel traffic, but useful. ;-) ;-)
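
[As a very rough illustration of what one record in such a collection might
look like -- a sketch only, assuming Python; the field names and example
values are invented.]

"""One entry the hypothetical 'LKML maintainer' might keep, following the
three items listed above."""
from dataclasses import dataclass, field
from typing import List

@dataclass
class LkmlEntry:
    # 1) the patch and/or bug report (or a pointer to it)
    patch_or_report: str
    # 2) the whole LKML thread, with the "important" messages marked
    thread_msgids: List[str] = field(default_factory=list)
    important_msgids: List[str] = field(default_factory=list)
    # 3) personal input, prioritization, severity info, etc.
    priority: int = 3          # 1 = critical, 5 = nice-to-have
    severity: str = "normal"
    notes: str = ""

entry = LkmlEntry(
    patch_or_report="loop-build-fix.diff",
    thread_msgids=["<msgid-1>", "<msgid-2>"],
    important_msgids=["<msgid-2>"],
    priority=2,
    severity="build failure",
    notes="confirmed on 2.4.17-pre1; no maintainer ack yet")
print(entry.priority, entry.patch_or_report)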

What this gives is a very succinct source of data for the key kernel
developers and maintainers who don't necessarily have time to weed
through lkml -- they can check the web site occasionally, or even fairly
frequently, and spend a very minimal amount of time glancing through for
anything that sounds relevant.

I think this would also be a good place to track which patches make it
into stable vs. development kernels (2.4.x vs 2.5.x right now). But
this would NOT necessarily have crossover with the maintainers -- this
compilation would be unofficial and for informational purposes only.

This lkml maintainer would NOT be the sole source of lkml output --
clearly the public and direct sharing of patches should continue, but I
think compiling the best bits of lkml is 99% of the battle.

Just my two cents. If enough of the relevant folks think this is
worthwhile, I might try to set this up (in my free time between 24 and
27 hours a day...) since I'm already doing a small subpart of this for
my own purposes.

Thx,
--
Ken.
[email protected]


On Tue, Dec 18, 2001 at 04:09:00PM +0100, Dead2 wrote:
| > > > 1. Are we satisfied with the source code control system ?
| > >
| > > Yes. Alan (2.2) and Marcelo (2.4) and Linus (2.5) are doing
| > > a good job with source control.
| >
| > Not really. We do a passable job. Stuff gets dropped, lost,
| > deferred and forgotten, applied when it conflicts with other work
| > - much of this stuff that software wouldnt actually improve on over a
| > person
|
| What about having the Linux source code in a CVS tree where active/trusted
| driver-/module-maintainers are granted write access, and everyone else read
| access.
| After the patches are applied, people will test them out, and bugfixes will
| be applied when bugs are detected.
| Then, when the kernel-maintainer feels this or that sourcecode is ready for a
| .pre kernel, he puts it in the main kernel tree.
| (This would indeed pose a security risk, but who in their right mind would run
| a CVS snapshot on anything important, that's right _noone_ in their _right
| mind_)
|
| Of course this would require much maintenance, and possibly more than
| one kernel-maintainer. This because of how much easier it would become
| for driver-/module-maintainers to apply patches they believe would make
| things better. Cleanups would also be necessary from time to time..
| (cleanups = making the CVS identical to the main kernel tree again)
|
| Just my 2 cents..
|
| -=Dead2=-
|
|
|
|

2001-12-18 21:28:16

by Dead2

[permalink] [raw]
Subject: Re: The direction linux is taking (CVS)

----- Original Message -----
From: "Ken Brownfield" <[email protected]>

> The CVS tree availability you mention parallels the FreeBSD tree, I
> believe. However, assuming enough brain cycles, one knowledgable
> maintainer seems to be a better method of maintaining a kernel.

Then the positive and negative sides should be gathered from their experiences
as well; it can be good for the outcome to follow something that has
been tested thoroughly by others.

*snip*

> I've been following lkml for some time now, and I've been using patches
> that get posted to the list. But I do so at my own risk, since I do not
> have comprehensive knowledge of kernel internels. But even I can tell
> that many of the patches posted are either bogus, are potentially
> incorrect in subtle and/or complex ways, or are simply working around
> user-space issues or other bugs.

Therefore only trusted maintainers should have write access. Normal mortals
like me would have to contact the maintainer(s) for that sub-tree.

> What might take out a few birds with one stone is to have someone on
> lkml become an "LKML MAINTAINER": collect patches and bug reports in a
> central place. This would include:
>
> 1) The patch and/or bug report
> 2) The entire LKML thread, with "important" messages marked
> 3) Personal input, prioritization, severity info, etc.

Or even make a [email protected] address that would be
parsed manually or automatically.

*snip-snip*
> --
> Ken.
> [email protected]

-=Dead2=-

> On Tue, Dec 18, 2001 at 04:09:00PM +0100, Dead2 wrote:
> | > > > 1. Are we satisfied with the source code control system ?
> | > >
> | > > Yes. Alan (2.2) and Marcelo (2.4) and Linus (2.5) are doing
> | > > a good job with source control.
> | >
> | > Not really. We do a passable job. Stuff gets dropped, lost,
> | > deferred and forgotten, applied when it conflicts with other work
> | > - much of this stuff that software wouldnt actually improve on over a
> | > person
> |
> | What about having the Linux source code in a CVS tree where active/trusted
> | driver-/module-maintainers are granted write access, and everyone else read
> | access.
> | After the patches are applied, people will test them out, and bugfixes will
> | be applied when bugs are detected.
> | Then, when the kernel-maintainer feels this or that sourcecode is ready for a
> | .pre kernel, he puts it in the main kernel tree.
> | (This would indeed pose a security risk, but who in their right mind would run
> | a CVS snapshot on anything important, that's right _noone_ in their _right
> | mind_)
> |
> | Of course this would require much maintenance, and possibly more than
> | one kernel-maintainer. This because of how much easier it would become
> | for driver-/module-maintainers to apply patches they believe would make
> | things better. Cleanups would also be necessary from time to time..
> | (cleanups = making the CVS identical to the main kernel tree again)
> |
> | Just my 2 cents..
> |
> | -=Dead2=-
> |
> |
> |
> |


2001-12-19 10:02:11

by Dead2

[permalink] [raw]
Subject: Re: The direction linux is taking

> > What might take out a few birds with one stone is to have someone on
> > lkml become an "LKML MAINTAINER": collect patches and bug reports in a
> > central place. This would include:
> >
> > 1) The patch and/or bug report
> > 2) The entire LKML thread, with "important" messages marked
> > 3) Personal input, prioritization, severity info, etc.
>
> Good idea but _who_ will do that?

Just a thought that popped into my head..
What about using some kind of bug-tracking system for this,
where people can report a bug and file patches for it?
There would indeed be some situations where two, three, four or maybe
more patches are filed to fix one bug. Then the maintainer
can easily look through the patches and decide which to use..
-=Dead2=-


2001-12-19 18:12:14

by Ken Brownfield

[permalink] [raw]
Subject: Re: The direction linux is taking

On Wed, Dec 19, 2001 at 09:27:41AM -0200, vda wrote:
| On Tuesday 18 December 2001 16:37, Ken Brownfield wrote:
| > The CVS tree availability you mention parallels the FreeBSD tree, I
| > believe. However, assuming enough brain cycles, one knowledgable
| > maintainer seems to be a better method of maintaining a kernel.
|
| As tree gets larger over time, Linus *and* Alan hacking on the single tree in
| CVS ought to be more productive than regular time consuming syncs between
| Linus and -ac trees (but requires higher level of mutual trust).

Yes, I agree; this is somewhat along the lines of what I mentioned -- a
main maintainer (Linus in this case) has their own CVS tree that they
can have specific people work on. This example would be for the entire
kernel, and as you say, trust would be required. The number of people in
this specific case would be countable on one hand (or perhaps two
fingers), whereas instances of tree sharing within sub-parts of the
kernel could be wider and looser.

My only negative reaction would be to a single kernel tree shared among
many/all developers -- more than two, say.

--
Ken.
[email protected]

2001-12-19 18:06:33

by Ken Brownfield

[permalink] [raw]
Subject: Re: The direction linux is taking

That's one of the specific options I proposed. I think it would be best
if it were focussed on lkml reports rather than noise from everywhere,
and also if it kept the maintainers' interface as simple and explicit as
possible. And yes, "who" is the question that I'm pondering as well.

--
Ken.
[email protected]

On Wed, Dec 19, 2001 at 10:56:53AM +0100, Dead2 wrote:
| > What might take out a few birds with one stone is to have someone on
| > > lkml become an "LKML MAINTAINER": collect patches and bug reports in a
| > > central place. This would include:
| > >
| > > 1) The patch and/or bug report
| > > 2) The entire LKML thread, with "important" messages marked
| > > 3) Personal input, prioritization, severity info, etc.
| >
| > Good idea but _who_ will do that?
|
| Just a thought that popped into my head..
| What about using some kind of BugTrack system for this?
| Where people can report a bug, and file patches for them.
| There would indeed be some situations where 2-3-4 maybe
| more patches are filed to fix one bug. Then the maintainer
| can easily look through the patches and descide what to use..
|
| -=Dead2=-
|
|

2001-12-20 09:09:22

by kaih

[permalink] [raw]
Subject: Re: The direction linux is taking

[email protected] (Momchil Velikov) wrote on 18.12.01 in <[email protected]>:

> >>>>> "David" == David Weinehall <[email protected]> writes:
>
> David> We'd also kill off all the advantages that the bazaar-style
> development
>
> Bazaaar-style development ? What bazaar-style development ? Last I
> heard most discussions are held in private and many key people don't
> read lkml.

So? That's never been an essential ingredient of bazaar-style development
(though it can, of course, be rather useful).

And it's been long known that forums for successful projects, like l-k,
tend to grow until key developers look for another forum because they just
don't have the time to follow. It's unfortunate, but not everyone is Alan
Cox.

But, once a project *has* grown like that, there is usually another venue
(or several) available for those key developers. And remember, this being
open source, everyone who has the necessary basic expertise and enough
time can *become* one of them.

MfG Kai

2001-12-20 09:31:54

by Momchil Velikov

[permalink] [raw]
Subject: Re: The direction linux is taking

>>>>> "Kai" == Kai Henningsen <[email protected]> writes:

Kai> [email protected] (Momchil Velikov) wrote on 18.12.01 in <[email protected]>:
>> >>>>> "David" == David Weinehall <[email protected]> writes:
>>
David> We'd also kill off all the advantages that the bazaar-style
>> development
>>
>> Bazaaar-style development ? What bazaar-style development ? Last I
>> heard most discussions are held in private and many key people don't
>> read lkml.

Kai> So? That's never been an essential ingredient of bazaar-style development
Kai> (though it can, of course, be rather useful).

Are you sure you aren't confusing "bazaar" with "shared source"? Or SCSL?

I'd suggest you read the relevant writing, especially the sentences
which contain the phrase "beta-testers".

2001-12-23 14:11:47

by Eyal Sohya

[permalink] [raw]
Subject: Re: The direction linux is taking




>From: David Weinehall <[email protected]>
>To: "M. Edward (Ed) Borasky" <[email protected]>
>CC: Eyal Sohya <[email protected]>, [email protected]
>Subject: Re: The direction linux is taking
>Date: Tue, 18 Dec 2001 16:18:45 +0100
>
>On Tue, Dec 18, 2001 at 06:38:26AM -0800, M. Edward (Ed) Borasky wrote:
> > On Tue, 18 Dec 2001, Eyal Sohya wrote:
> >
> > > I've watched this List and have some questions to ask
> > > which i would appreciate are answered. Some might not
> > > have definite answers and we might be divided on them.
> >
> > My opinions only!!
> >
> >
> > > 1. Are we satisfied with the source code control system ?
> >
> > With CVS, probably -- it's open source and rather universally known.
> > With the version control *process* ... well ... I personally favor a
> > full SEI CMM level 2 or even level 3 process. Whether there are open
> > source tools to facilitate that process is another story.
> >
> > > 2. Is there enough planning for documentation ? As another poster
> > > mentioned, there are new API and we dont know about them.
> >
> > There is, as it turns out, a tremendous *amount* of documentation,
> > although it is not as centralized as it could be. Again, I favor the SEI
> > CMM model.
> >
> > > 3. There is no central bug tracking database. At least people should
> > > know the status of the bugs they have found with some releases.
> >
> > Absolutely! Bug tracking and source / version control ought to be
> > integrated and centralized.
> >
> > > 4. Aggressive nature of this mailing list itself may be a turn off to
> > > many who would like to contribute.
> >
> > Well ... peer review / code walkthroughs are part of SEI CMM level 3
> > IIRC, and peer review is an important part of the scientific process. We
> > all have our opinions and our reasons for being here and levels of
> > contribution we are willing and able to make. When all is said and done,
> > more is said than done :)). A lot *is* getting done! The only things I
> > would change about this list are a reliable digest, a *vastly* better
> > search engine and a better mailing list manager than majordomo.
>

No one is asking for an SEI CMM type of model for kernel
development. A system of checks so that things
that used to work don't get broken is hardly too much to expect.

That isn't asking for too much, is it?

Is a bug database for drivers and kernel subsystems asking for
an SEI CMM type model?

I don't think so.

>With SEI CMM level 3 for the kernel, complete testing and documentation,
>we'd be able to release a new kernel every 5 months, with new drivers
>2 years after release of the device, and support for new platforms
>2-3 years after their availability, as opposed to 1-2 years before
>(IA-64, for instance...)
>
>We'd also kill off all the advantages that the bazaar-style development
>style actually has, while gaining nothing in particular, except for
>a slow machinery of paper-work. No thanks.
>
>I don't complain when people do proper documentation and testing of
>their work; rather the opposite, but it needs to be done on a volunteer
>basis, not being forced by some standard. Do you really think Linus
>would be able to take all the extra work of software engineering? Think
>again. Do you honestly believe he'd accept doing so in a million years?
>Fat chance.
>
>Grand software engineering based on PSP/CMM/whatever is fine when you
>have a clear goal in mind; a plan stating what to do, detailing
>everything meticously. Not so for something that changes directions on
>pure whim from one week to the next, with the only goal being
>improvement, expansion and (sometimes) simplification. Yes, some people
>have a grand plan for their subsystems (I'm fairly convinced that
>Alexander Viro has some plans up his sleeve for the VFS, and I'm sure it
>involves a lot of ideas from Plan 9. But this is pure speculation, of
>course...) and there are some goals (such as the pending transition to a
>bigger dev_t, CML2, kbuild 2.5 et al), but most development takes place
>as follows: idea -> post on lkml -> long discussion -> implementation ->
>long discussion (about petty details) -> inclusion/rejection -> possible
>rehash of this...
>
>
>Regards: David Weinehall
> _ _
> // David Weinehall <[email protected]> /> Northern lights wander \\
>// Maintainer of the v2.0 kernel // Dance across the winter sky //
>\> http://www.acc.umu.se/~tao/ </ Full colour fire </





2001-12-23 14:14:37

by Eyal Sohya

[permalink] [raw]
Subject: Re: The direction linux is taking




>From: Alan Cox <[email protected]>
>To: [email protected] (Dana Lacoste)
>CC: [email protected] ('Eyal Sohya'), [email protected]
>Subject: Re: The direction linux is taking
>Date: Tue, 18 Dec 2001 15:04:13 +0000 (GMT)
>
> > > 1. Are we satisfied with the source code control system ?
> >
> > Yes. Alan (2.2) and Marcelo (2.4) and Linus (2.5) are doing
> > a good job with source control.

Really? Do you know what good source control is? I doubt it.

>
>Not really. We do a passable job. Stuff gets dropped, lost,
>deferred and forgotten, applied when it conflicts with other work
>- much of this stuff that software wouldnt actually improve on over a
>person
>
> > Although this seems annoying, it's just one facet of the
> > primary difference between Linux and a commercially based
> > kernel : if you want to know how something works and how
> > it's being developed, then you MUST participate, in this
> > and other mailing lists.
>
>That wont help you - most discussion occurs in private because l/k
>is too noisy and many key people dont read it.
>
> > > 3. There is no central bug tracking database. At least people
> > > should know the status of the bugs they have found with some
> > > releases.
> >
> > There is no central product, so there can be no central bug track.
> > (see below)
>
>Rubbish. Ask the engineering world about fault tracking. You won't get
>"different products no central flaw tracking" you'll get extensive cross
>correlation, statistical tools and the like in any syste, where reliability
>matters
>
>Many kernel bug reports end up invisible to some of the developers.


That is exactly my point.
>
>Alan





2001-12-23 14:18:47

by Eyal Sohya

[permalink] [raw]
Subject: Re: The direction linux is taking

But does our final arbiter intervene when he should?
Would he step in to stop a discussion thread from becoming
an ego bash, in the larger interests of the kernel?


>From: Craig Christophel <[email protected]>
>To: "Eyal Sohya" <[email protected]>
>CC: [email protected]
>Subject: Re: The direction linux is taking
>Date: Tue, 18 Dec 2001 01:11:15 -0500
>
>The aggressive nature of the list is a result of people who have all spent
>a
>great deal of time researching kernel internals. It is much akin to a
>thesis
>proposal and defense that you would see in an educational environment. It
>may not be the most comfortable thing in the world, but it gets the base
>issues resolved, because if you do not know what is going on someone WILL
>tell you. -- and then you revise and defend. think of it as an academic
>discussion forum -- where people (usually) have the right to sound off.
>There is normally no harm done, although the recent MM discussions have
>been
>a bit heated.
>
>
>
>Craig.
>
>
>
>On Tuesday 18 December 2001 00:20, Eyal Sohya wrote:
> > I've watched this List and have some questions to ask
> > which i would appreciate are answered. Some might not
> > have definite answers and we might be divided on them.
> >
> > 1. Are we satisfied with the source code control system ?
> > 2. Is there enough planning for documentation ? As another
> > poster mentioned, there are new API and we dont know about
> > them.
> > 3. There is no central bug tracking database. At least people
> > should know the status of the bugs they have found with some
> > releases.
> > 4. Aggressive nature of this mailing list itself may be a
> > turn off to many who would like to contribute.
> >
> >
> >





2001-12-27 15:46:41

by Dana Lacoste

[permalink] [raw]
Subject: RE: The direction linux is taking

> > > > 1. Are we satisfied with the source code control system ?

> > > Yes. Alan (2.2) and Marcelo (2.4) and Linus (2.5) are doing
> > > a good job with source control.

> really ?
> Do you know what good source control is ? i doubt it.

Why do you drop to personal insults? Are your arguments that weak?

I'm a Perforce admin. If you don't know what Perforce is, go
look it up. I used to be a CVS admin; I REALLY hope you know what
THAT is. And I've used ClearCase and even SCCS/RCS in the past.

My point wasn't that Marcelo and Linus and Alan are source control
systems. My point was that if you're looking for a properly source-
controlled project, THEN DON'T USE LINUX AND QUIT YOUR FUCKING WHINING.

(OK, that might be a bit harsh, but there have been quite a few
people here who think that the Linux kernel should be maintained
in the same way as a closed-source, commercially run project. But
if it were, it wouldn't be _Linux_ any more.)

Linux is maintained by Linus (2.5), Alan (2.2), and Marcelo (2.4)
(and someone's doing 2.0 still, but I forget who :)
That's The Way It Is (tm), and trying to change that isn't going to
happen any time soon (nor, given the disparity between the
Linux kernel and the Linux distributions, should it).

> >Not really. We do a passable job. Stuff gets dropped, lost,
> >deferred and forgotten, applied when it conflicts with other work
> >- much of this stuff that software wouldnt actually improve on over a
> >person

Ahhh, so trying to tackle these problems would be a good idea.
For example, Marcelo's adoption of the rule that the final -pre is
(essentially) unchanged when it becomes the formal release.

Q - Would CVS or Perforce or BitKeeper help fix these problems?
A - No, the problem is one of organization, not accountability

Maybe we should toss the original question and try to find ways
to solve the organizational problems instead?

> >Many kernel bug reports end up invisible to some of the developers.

> that is exactly my point.

So maybe we should make it clear (in the MAINTAINERS file, for example)
WHERE patches and bugs should be reported?

It sounds more like a reporting problem than a tracking problem: the
maintainers know which bugs have been fixed (or at least which patches
to fix them have been applied), so the only thing missing is that the
maintainers have to know about the bugs in the first place. I don't think
that a Bugzilla-type central bug reporting tool will help with that,
because the maintainers who don't read LKML won't pay attention to
Bugzilla either.

--
Dana Lacoste
Ottawa, Canada

2001-12-27 16:01:53

by Rik van Riel

[permalink] [raw]
Subject: RE: The direction linux is taking

On Thu, 27 Dec 2001, Dana Lacoste wrote:

> > >Not really. We do a passable job. Stuff gets dropped, lost,
> > >deferred and forgotten, applied when it conflicts with other work
> > >- much of this stuff that software wouldnt actually improve on over a
> > >person

> Q - Would CVS or Perforce or BitKeeper help fix these problems?
> A - No, the problem is one of organization, not accountability
>
> Maybe we should toss the original question and try to find ways
> to solve the organizational problems instead?

Sounds like an idea, except that up to now I haven't seen
any suitable solution for this.

The biggest problem right now seems to be that of patches
being dropped, which is a direct result of the kernel
maintainers not having infinite time.

A system to solve this problem would have to make it easier
for the kernel maintainers to remember patches, while at the
same time saving them time. I guess it would have something
like the following ingredients:
1. remember the patches and their descriptions
2. have the possibility for other people (subsystem maintainers?)
to de-queue or update pending patches
3. check at each pre-release if the patches still apply, notify
the submitter if the patch no longer applies
4. make an easy "one-click" solution for the maintainers to apply
the patch and add a line to the changelog ;)
(all patches apply without rejects, patches which don't apply
have already been bounced back to the maintainer by #3)
5. after a new pre-patch, send the kernel maintainer a quick
overview of pending patches
6. patches can get different priorities assigned, so the kernel
maintainers can spend their time with the highest-priority
patches first
7. .. ?

All in all, if such a system is ever going to exist, it
needs to _reduce_ the amount of work the kernel maintainers
need to do, otherwise it'll never get used.
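
[A minimal sketch of what such a queue might look like, assuming Python and
the standard patch tool; the mail notification, changelog format and field
names are invented, and points 2 and 7 are left out.]

"""Sketch of the patch queue described in points 1-6 above."""
import subprocess
from dataclasses import dataclass, field

@dataclass(order=True)
class QueuedPatch:
    priority: int                            # 6) lower number = more urgent
    submitter: str = field(compare=False)
    description: str = field(compare=False)  # 1) patch + its description
    diff: str = field(compare=False)         #    unified diff text

def still_applies(p, tree):
    """3) check at each pre-release that the patch still applies cleanly."""
    r = subprocess.run(["patch", "-p1", "--dry-run"],
                       input=p.diff, text=True, cwd=tree, capture_output=True)
    return r.returncode == 0

def apply_and_log(p, tree, changelog="ChangeLog"):
    """4) the 'one-click' step: apply the patch and add a changelog line."""
    subprocess.run(["patch", "-p1"], input=p.diff, text=True, cwd=tree, check=True)
    with open(f"{tree}/{changelog}", "a") as log:
        log.write(f"- {p.description} ({p.submitter})\n")

def prerelease_overview(queue, tree):
    """5) after a new pre-patch, give the maintainer a quick overview;
    stale patches are bounced back to their submitters (3)."""
    for p in sorted(queue):                  # 6) highest priority first
        if still_applies(p, tree):
            print(f"PENDING prio={p.priority}: {p.description} <{p.submitter}>")
        else:
            print(f"BOUNCE to {p.submitter}: no longer applies")  # would send mail
            queue.remove(p)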

regards,

Rik
--
DMCA, SSSCA, W3C? Who cares? http://thefreeworld.net/

http://www.surriel.com/ http://distro.conectiva.com/

2001-12-27 16:24:12

by Alan

[permalink] [raw]
Subject: Re: The direction linux is taking

> All in all, if such a system is ever going to exist, it
> needs to _reduce_ the amount of work the kernel maintainers
> need to do, otherwise it'll never get used.

Tridge wrote the system you describe, several years ago. It's called
jitterbug, but it doesn't help because Linus won't use it.

2001-12-27 16:31:42

by Rik van Riel

[permalink] [raw]
Subject: Re: The direction linux is taking

On Thu, 27 Dec 2001, Alan Cox wrote:

> > All in all, if such a system is ever going to exist, it
> > needs to _reduce_ the amount of work the kernel maintainers
> > need to do, otherwise it'll never get used.
>
> Tridge wrote the system you describe, several years ago. Its called
> jitterbug but it doesnt help because Linus wont use it

I don't care about Linus; he drops so many bugfixes that
his kernels have done nothing but suck rocks since the
2.1 era.

This system could be useful for people who _are_ maintainers,
however.

regards,

Rik
--
DMCA, SSSCA, W3C? Who cares? http://thefreeworld.net/

http://www.surriel.com/ http://distro.conectiva.com/

2001-12-27 16:43:43

by Alan

[permalink] [raw]
Subject: Re: The direction linux is taking

> I don't care about Linus, he drops so many bugfixes
> his kernel have done nothing but suck rocks since the
> 2.1 era.
>
> This system could be useful for people who _are_ maintainers,
> however.

In which case you'll find jitterbug on http://www.samba.org

2001-12-27 16:58:36

by Russell King

[permalink] [raw]
Subject: Re: The direction linux is taking

On Thu, Dec 27, 2001 at 04:33:50PM +0000, Alan Cox wrote:
> Tridge wrote the system you describe, several years ago. Its called
> jitterbug but it doesnt help because Linus wont use it

Speaking as someone who _does_ use a system for tracking patches, I
believe that patch management systems are a right pain in the arse.

If the quality of the patches isn't good, then it throws you into a problem.
You have to provide people with a reason why you discarded their patch,
which provides people with the perfect opportunity to immediately start
bugging you about exactly how to make it better. If you get lots of
such patches, eventually you've got a mailbox of people wanting to know
how to make their patches better.

I envy Alan, Linus, and Marcelo for having the ability to silently drop
patches and wait for resends. I personally don't believe a patch tracking
system makes life any easier. Yes, it means you can't lose patches, but
it also means you can't accidentally lose them on purpose. This, imho, makes
life very much harder.

I hope this makes some sort of sense 8)

--
Russell King ([email protected]) The developer of ARM Linux
http://www.arm.linux.org.uk/personal/aboutme.html

2001-12-27 17:03:27

by Thomas Capricelli

[permalink] [raw]
Subject: Re: The direction linux is taking



> > I don't care about Linus, he drops so many bugfixes
> > his kernel have done nothing but suck rocks since the
> > 2.1 era.
> >
> > This system could be useful for people who _are_ maintainers,
> > however.
>
> In which case you'll find jitterbug on http://www.samba.org



I disagree: jitterbug is cool, but it doesn't handle all the points Rik has
asked for. Especially point 3: check that each patch applies and, if not, send
a mail to the maintainer. Or points 4, 5, ...

It shouldn't be difficult to write such a system, though. The problem is:
would that kind of system be used? I don't even ask about Linus: I know he
won't.


What about Marcelo, Alan and the main-kernel-part maintainers ?


Marcelo, Alan, tell us you would use such a system, and I'm sure people
would try to set something up.
Too many people have already spent time developing tools especially for
the Linux kernel. Tell us you'll use such a system, and we'll write/hack it.
You remember Linus saying he'd integrate kbuild and CML2 between 2.5.1 and
2.5.2? Do you still believe this? I don't.

And, btw, jitterbug is too big for Linus. He's afraid of too complex system.
IMHO he could consider using a small&simple tool that fit exactly his view of
what kernel development is.
That is : how it's done today, but automaticated.
"he could". But how to know ? no way, no way.

regards,
Thomas


2001-12-27 17:11:46

by Rik van Riel

[permalink] [raw]
Subject: Re: The direction linux is taking

On Thu, 27 Dec 2001, Russell King wrote:

> I envy Alan, Linus, and Marcelo for having the ability to silently
> drop patches and wait for resends.

I'm not going to resend more than twice. If after that
a critical bugfix isn't applied, I'll put it in our
kernel RPM and the rest of the world has tough luck.

regards,

Rik
--
DMCA, SSSCA, W3C? Who cares? http://thefreeworld.net/

http://www.surriel.com/ http://distro.conectiva.com/

2001-12-27 17:26:57

by Erik Mouw

[permalink] [raw]
Subject: Re: The direction linux is taking

On Thu, Dec 27, 2001 at 03:11:01PM -0200, Rik van Riel wrote:
> On Thu, 27 Dec 2001, Russell King wrote:
>
> > I envy Alan, Linus, and Marcelo for having the ability to silently
> > drop patches and wait for resends.
>
> I'm not going to resend more than twice. If after that
> a critical bugfix isn't applied, I'll put it in our
> kernel RPM and the rest of the world has tough luck.

There is a difference between critical bugfixes and patches that don't
do the Right Thing [tm]. I've never seen Russell dropping the former.


Erik

--
J.A.K. (Erik) Mouw, Information and Communication Theory Group, Faculty
of Information Technology and Systems, Delft University of Technology,
PO BOX 5031, 2600 GA Delft, The Netherlands Phone: +31-15-2783635
Fax: +31-15-2781843 Email: [email protected]
WWW: http://www-ict.its.tudelft.nl/~erik/

2001-12-27 17:43:48

by Alan

[permalink] [raw]
Subject: Re: The direction linux is taking

> I envy Alan, Linus, and Marcelo for having the ability to silently drop
> patches and wait for resends. I personally don't believe a patch tracking

I go to great lengths to try and avoid that. People often get very
short replies, but I try to make sure that if the patch isn't queued to apply
they get a reply. Sometimes I have to sit on them for a week until I
understand why I don't like them.

The things I happily drop are people arguing about why I dropped their patch.

2001-12-27 17:39:18

by Richard Gooch

[permalink] [raw]
Subject: Re: The direction linux is taking

Russell King writes:
> On Thu, Dec 27, 2001 at 04:33:50PM +0000, Alan Cox wrote:
> > Tridge wrote the system you describe, several years ago. It's called
> > jitterbug, but it doesn't help because Linus won't use it.
>
> Speaking as someone who _does_ use a system for tracking patches, I
> believe that patch management systems are a right pain in the arse.
>
> If the quality of patches isn't good, then it throws you into a
> problem. You have to provide people with a reason why you discarded
> their patch, which provides people with the perfect opportunity to
> immediately start bugging you about exactly how to make it better.
> If you get lots of such patches, eventually you've got a mailbox of
> people wanting to know how to make their patches better.

So you just do what Linus does: delete those questions without
replying. No matter what system you use, if you want to avoid an
overflowing mailbox, you either have to silently drop patches, and/or
silently drop questions/requests/begging letters. There isn't really
much difference between the two.

Regards,

Richard....
Permanent: [email protected]
Current: [email protected]

2001-12-27 17:44:28

by Alan

[permalink] [raw]
Subject: Re: The direction linux is taking

I have a system I am happy with: I save stuff that looks worth applying into
a TO_APPLY directory, then merge it in logical chunks.
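
A minimal sketch of that workflow in Python, assuming the TO_APPLY directory
named above and a hypothetical tree location; an illustration only, not the
actual scripts:

import glob
import os
import subprocess

TREE = "/usr/src/linux"      # illustrative tree location
QUEUE = "TO_APPLY"           # directory of saved patches

# Apply queued patches in name order, stopping at the first one that does
# not apply cleanly so it can be inspected (or dropped) by hand.
for patch_file in sorted(glob.glob(os.path.join(QUEUE, "*.patch"))):
    result = subprocess.run(
        ["patch", "-p1", "-i", os.path.abspath(patch_file)], cwd=TREE)
    if result.returncode != 0:
        print("stopped at", patch_file)
        break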

Alan

2001-12-27 17:53:58

by Alan

[permalink] [raw]
Subject: Re: The direction linux is taking

> So you just do what Linus does: delete those questions without
> replying. No matter what system you use, if you want to avoid an
> overflowing mailbox, you either have to silently drop patches, and/or
> silently drop questions/requests/begging letters. There isn't really
> much difference between the two.

The problem is that if Linus is simply ignoring you then you don't know
why. A simple "Clean up the ifdefs" would make a lot of difference. If
someone sent a patch it's because they hit something they felt needed fixing
and, as far as they can tell, fixed it. If you want them to go elsewhere,
ignore them, but it's much more useful to give them at least brief answers
to actual patch files.

2001-12-27 17:55:48

by Dave Jones

[permalink] [raw]
Subject: Re: The direction linux is taking

On Thu, 27 Dec 2001, Richard Gooch wrote:

> So you just do what Linus does: delete those questions without
> replying. No matter what system you use, if you want to avoid an
> overflowing mailbox, you either have to silently drop patches, and/or
> silently drop questions/requests/begging letters. There isn't really
> much difference between the two.

Just a sidenote:
Patches cc'd to linux-kernel instead of just to Alan/Marcelo/Linus are
also far more likely to be 'rediscovered' sometime, bringing up
"why wasn't this merged?" mails at a point when the time is perhaps better
for $maintainer to merge.

I spent post-xmas-lunch going through backlogged l-k mails, and found
a bunch of patches that fix small problems that never got merged.
(These bits ended up in -dj6, and what will be -dj7 btw, and will get
pushed to the relevant people soon.)

Note, however, that xmas comes but once a year, so someone else can pick
up the silently ignored stuff next time 8-)

Dave.

--
| Dave Jones. http://www.codemonkey.org.uk
| SuSE Labs

2001-12-27 18:03:18

by Andre Hedrick

[permalink] [raw]
Subject: Re: The direction linux is taking


I agree with Russell, because we now have a pile of shit known as 2.5.X.
Noting that 2.4.X is just a little less stinky, but still a rat-hole of
bug-infested crap whose fixes are being ignored out of total
ignorrance.

I just wish I could spell.

Andre Hedrick
CEO/President, LAD Storage Consulting Group
Linux ATA Development
Linux Disk Certification Project

On Thu, 27 Dec 2001, Russell King wrote:

> On Thu, Dec 27, 2001 at 04:33:50PM +0000, Alan Cox wrote:
> > Tridge wrote the system you describe, several years ago. It's called
> > jitterbug, but it doesn't help because Linus won't use it.
>
> Speaking as someone who _does_ use a system for tracking patches, I
> believe that patch management systems are a right pain in the arse.
>
> If the quality of patches isn't good, then it throws you into a problem.
> You have to provide people with a reason why you discarded their patch,
> which provides people with the perfect opportunity to immediately start
> bugging you about exactly how to make it better. If you get lots of
> such patches, eventually you've got a mailbox of people wanting to know
> how to make their patches better.
>
> I envy Alan, Linus, and Marcelo for having the ability to silently drop
> patches and wait for resends. I personally don't believe a patch tracking
> system makes life any easier. Yes, it means you can't lose patches, but
> it also means you can't accidentally lose them on purpose. This, imho, makes
> life very much harder.
>
> I hope this makes some sort of sense 8)
>
> --
> Russell King ([email protected]) The developer of ARM Linux
> http://www.arm.linux.org.uk/personal/aboutme.html
>
> -
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to [email protected]
> More majordomo info at http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at http://www.tux.org/lkml/
>

2001-12-27 18:00:18

by Richard Gooch

[permalink] [raw]
Subject: Re: The direction linux is taking

Alan Cox writes:
> > So you just do what Linus does: delete those questions without
> > replying. No matter what system you use, if you want to avoid an
> > overflowing mailbox, you either have to silently drop patches, and/or
> > silently drop questions/requests/begging letters. There isn't really
> > much difference between the two.
>
> The problem is that if Linus is simply ignoring you then you don't
> know why. A simple "Clean up the ifdefs" would make a lot of
> difference. If someone sent a patch it's because they hit something
> they felt needed fixing and, as far as they can tell, fixed it. If you
> want them to go elsewhere, ignore them, but it's much more useful to
> give them at least brief answers to actual patch files

Oh, don't get me wrong. I agree completely. A short two-minute reply
is not that much to ask for, and I wish Linus would be more
responsive. And apply bugfix patches (I've been trying for weeks
to get him to apply my patches to fix a bunch of Oopses :-().
But years of observation tell me that Linus likes the way he does
things and doesn't care if others don't like it. I don't expect to see
much change there.

But the point I was making was that a patch management system doesn't
really make things harder to drop/ignore. If you're comfortable with
ignoring patches (which take *work* to construct), then it's no
stretch to ignore questions (which often take little work to send).

Regards,

Richard....
Permanent: [email protected]
Current: [email protected]

2001-12-27 18:04:08

by Richard Gooch

[permalink] [raw]
Subject: Re: The direction linux is taking

Dave Jones writes:
> On Thu, 27 Dec 2001, Richard Gooch wrote:
>
> > So you just do what Linus does: delete those questions without
> > replying. No matter what system you use, if you want to avoid an
> > overflowing mailbox, you either have to silently drop patches, and/or
> > silently drop questions/requests/begging letters. There isn't really
> > much difference between the two.
>
> just a sidenote:
> Patches cc'd to linux-kernel instead of just to Alan/Marcelo/Linus are
> also far more likely to be 'rediscovered' sometime, bringing up
> "why wasn't this merged?" mails when perhaps the time is better for
> $maintainer to merge.

Sure, although I post to l-k a URL and ChangeLog, and separately send
to Linus/Marcelo the actual patch. People grumble when I send kiB's
(or even kB's :-) to the list.

Regards,

Richard....
Permanent: [email protected]
Current: [email protected]

2001-12-27 18:08:18

by Linus Torvalds

[permalink] [raw]
Subject: Re: The direction linux is taking

In article <Pine.LNX.4.33L.0112271509570.12225-100000@duckman.distro.conectiva>,
Rik van Riel <[email protected]> wrote:
>On Thu, 27 Dec 2001, Russell King wrote:
>
>> I envy Alan, Linus, and Marcelo for having the ability to silently
>> drop patches and wait for resends.

This is absolutely true - it's a _very_ powerful thing. Old patches
simply grow stale: keeping track of them is not necessarily at all
useful, and can add more work than anything else.

One of the problems I had with jitterbug was that after a while the
thing just grew a lot, and I spent a lot of time with a cumbersome web
interface just acknowledging the patches. And that was despite the fact
that not very many people actually actively used jitterbug to submit
patches to me, so I could see it just getting a _lot_ worse.

>I'm not going to resend more than twice. If after that
>a critical bugfix isn't applied, I'll put it in our
>kernel RPM and the rest of the world has tough luck.

Which, btw, explains why I don't consider you a kernel maintainer, Rik,
and I don't tend to apply any patches at all from you. It's just not
worth my time to worry about people who aren't willing to sustain their
patches.

When Al Viro sends me a patch that I apply, and later sends me a fix to
it that I miss for whatever reason, I can feel comfortable in the
knowledge that he _will_ follow up, not just whine. This makes me very
willing to apply his patches in the first place.

Replace "Al Viro" with Jeff Garzik, David Miller, Alan Cox, etc etc. See
my point?

This is not about technology. This is about sustainable development.
The most important part to that is the developers themselves - I refuse
to put myself in a situation where _I_ need to scale, because that would
be stupid - people simply do not scale. So I require others to do more
of the work. Think distributed development.

Note that things like CVS do not help the fundamental problem at all.
They allow automatic acceptance of patches, and positively _encourage_
people to "dump" their patches on other people, and not act as real
maintainers.

We've seen this several times in Linux - David, for example, used to
maintain his CVS tree, and he ended up being rather frustrated about
having to then maintain it all and clean up the bad parts because I
didn't want to apply them (and he didn't really want me to) and he
couldn't make people clean up themselves because "once it was in, it was
in".

I know that source control advocates say that using source control makes
it easy to revert bad stuff, but that's simply not TRUE. It's _not_
easy to revert bad stuff. The only way to handle bad stuff is to make
people _responsible_ for their own sh*t, and have them maintain it
themselves.

And you refuse to do that, and then you complain when others do not want
to maintain your code for you.

Linus

2001-12-27 18:07:08

by Dave Jones

[permalink] [raw]
Subject: Re: The direction linux is taking

On Thu, 27 Dec 2001, Richard Gooch wrote:

> Sure, although I post to l-k a URL and ChangeLog, and separately send
> to Linus/Marcelo the actual patch. People grumble when I send kiB's
> (or even kB's :-) to the list.

Good enough. (In fact, preferable imo, as long as the patch stays
at the URL if still relevant/unapplied.)

The "I sent a patch n times to Linus without cc'ing l-k and he didn't
apply it" case, however, is a lost one imo.

Dave.

--
| Dave Jones. http://www.codemonkey.org.uk
| SuSE Labs

2001-12-27 18:18:58

by Richard Gooch

[permalink] [raw]
Subject: Re: The direction linux is taking

Dave Jones writes:
> On Thu, 27 Dec 2001, Richard Gooch wrote:
>
> > Sure, although I post to l-k a URL and ChangeLog, and separately send
> > to Linus/Marcelo the actual patch. People grumble when I send kiB's
> > (or even kB's :-) to the list.
>
> Good enough. (In fact, preferable imo, as long as the patch stays
> at the url if still relevant/unapplied).

Oh, sure. <checking archive>. I've still got a devfs patch dating back
to 20-AUG-1998 on my ftp site. I think that's long enough :-)

> The "I sent a patch n times to linus without ccing l-k and he didn't
> apply it" case however, is a lost one imo.

Agreed. Patches need review. It's actually quite frustrating when you
post a patch, wait a week, send it off to Linus/Marcelo, and then, when the
new kernel comes out, bug reports suddenly come out of the woodwork.

Regards,

Richard....
Permanent: [email protected]
Current: [email protected]

2001-12-27 18:25:18

by Rik van Riel

[permalink] [raw]
Subject: Re: The direction linux is taking

On Thu, 27 Dec 2001, Linus Torvalds wrote:

> >I'm not going to resend more than twice. If after that
> >a critical bugfix isn't applied, I'll put it in our
> >kernel RPM and the rest of the world has tough luck.
>
> Which, btw, explains why I don't consider you a kernel maintainer,
> Rik, and I don't tend to apply any patches at all from you. It's just
> not worth my time to worry about people who aren't willing to sustain
> their patches.

OK, I'll set up something to automatically send you patches
as long as they're not applied, get no reaction and
still apply cleanly.

regards,

Rik
--
DMCA, SSSCA, W3C? Who cares? http://thefreeworld.net/

http://www.surriel.com/ http://distro.conectiva.com/

2001-12-27 18:39:08

by Russell King

[permalink] [raw]
Subject: Re: The direction linux is taking

On Thu, Dec 27, 2001 at 10:59:55AM -0700, Richard Gooch wrote:
> Oh, don't get me wrong. I agree completely. A short two minute reply
> is not that much to ask for, and I wish Linus would be more
> responsive.

Let's give a good instance of where a "two minute reply" doesn't work.

Patch 816/1:
http://www.arm.linux.org.uk/developer/patches/?action=viewpatch&id=816/1

Patch received - 30 November. Comment made - 13 December. Reply - 13th
December - "Could you be more detailed on these points".

Well, the comment that's there was written over a couple of hours or more.
Why? Because the patch had interdependencies with a number of other patches,
and I had to see what was going on behind some of the indirections and so
forth. The reason that mail hasn't had a reply yet is:

- it's buried behind 200 other messages, so it's out of sight.
- it would require me to stop and think about it for significantly
longer to work out what this patch and the other patches were
trying to do.

Oh, don't get me wrong - I'm not about to give up my patch system! It
does serve some very important purposes in making my life easier:

- not losing patches I want to apply under a mountain of email.
- letting other people find patches that might be useful, but weren't
applied, and see the feedback on the patch.

I'd encourage anyone who wants to follow up this email to go and look
at the patches in question first - probably the easiest way is to go to:

http://www.home.arm.linux.org.uk/developer/patches/

type 'ipaq' into the "search for patch summaries containing" box and hit
enter. Note that most of the ipaq patches remaining depend on 816/1.

--
Russell King ([email protected]) The developer of ARM Linux
http://www.arm.linux.org.uk/personal/aboutme.html

2001-12-27 18:37:48

by Dave Jones

[permalink] [raw]
Subject: Re: The direction linux is taking

On Thu, 27 Dec 2001, Linus Torvalds wrote:

> This is absolutely true - it's a _very_ powerful thing. Old patches
> simply grow stale: keeping track of them is not necessarily at all
> useful, and can add more work than anything else.

*nod*, until they get scooped up into another tree -ac, -dj, -whatever
and fed to you whenever you're in the mood for resyncing.

> This is not about technology. This is about sustainable development.
> The most important part to that is the developers themselves - I refuse
> to put myself in a situation where _I_ need to scale, because that would
> be stupid - people simply do not scale. So I require others to do more
> of the work. Think distributed development.

Absolutely. When I decided to take on keeping the 2.4 patches in sync
with 2.5, I knew I was undertaking something of no small order.
Scooping up forward-port patches and silently-dropped bits from l-k
is almost a full-time job in itself when you and Marcelo release
kernels in quick succession 8-)

And when you're ready to resync what I've got so far (currently ~3mb),
it's going to be another full time job splitting it into bits to feed
you linus-bite-sized chunks. (ObSidenote: When this time comes btw,
if maintainers of relevant parts want to feed Linus their relevant
parts from my tree, that would be appreciated, and would keep _my_ load
down :-)

> We've seen this several times in Linux - David, for example, used to
> maintain his CVS tree, and he ended up being rather frustrated about
> having to then maintain it all and clean up the bad parts because I
> didn't want to apply them (and he didn't really want me to) and he
> couldn't make people clean up themselves because "once it was in, it was
> in".

"Used to" ? cvs @ vger.samba.org was still being maintained before
I went on xmas vacation. Did I miss something ?

Dave.

--
| Dave Jones. http://www.codemonkey.org.uk
| SuSE Labs

2001-12-27 18:42:30

by John Alvord

[permalink] [raw]
Subject: Re: The direction linux is taking

On Thu, 27 Dec 2001 10:38:28 -0700, Richard Gooch
<[email protected]> wrote:

>Russell King writes:
>> On Thu, Dec 27, 2001 at 04:33:50PM +0000, Alan Cox wrote:
>> > Tridge wrote the system you describe, several years ago. It's called
>> > jitterbug, but it doesn't help because Linus won't use it.
>>
>> Speaking as someone who _does_ use a system for tracking patches, I
>> believe that patch management systems are a right pain in the arse.
>>
>> If the quality of patches isn't good, then it throws you into a
>> problem. You have to provide people with a reason why you discarded
>> their patch, which provides people with the perfect opportunity to
>> immediately start bugging you about exactly how to make it better.
>> If you get lots of such patches, eventually you've got a mailbox of
>> people wanting to know how to make their patches better.
>
>So you just do what Linus does: delete those questions without
>replying. No matter what system you use, if you want to avoid an
>overflowing mailbox, you either have to silently drop patches, and/or
>silently drop questions/requests/begging letters. There isn't really
>much difference between the two.
>
> Regards,
>
> Richard....

Sounds like IP translated into human systems. We aren't surprised when
a UDP packet is silently dropped for one of a thousand reasons.

john alvord

2001-12-27 18:49:58

by Russell King

[permalink] [raw]
Subject: Re: The direction linux is taking

On Thu, Dec 27, 2001 at 10:38:28AM -0700, Richard Gooch wrote:
> So you just do what Linus does: delete those questions without
> replying.

See Erik Mouw's reply for enlightenment. But yes, some patches do sit
in the patch system without explanation for some time, until I find
the incentive or interest to do something with them.

(Side note: it would be interesting to find out how many people believe
that those 17 patches sitting in my incoming queue should be applied to
a 2.4 stable kernel tree:

http://www.arm.linux.org.uk/developer/patches/?action=section&section=0

Anything above and including 837/1 is 2.5 material anyway.)

--
Russell King ([email protected]) The developer of ARM Linux
http://www.arm.linux.org.uk/personal/aboutme.html

2001-12-27 19:01:48

by Linus Torvalds

[permalink] [raw]
Subject: Re: The direction linux is taking


On Thu, 27 Dec 2001, Rik van Riel wrote:
>
> OK, I'll setup something to automatically send you patches
> as long as they're not applied, don't get any reaction and
> still apply cleanly.

No.

Did you read the part about "maintainership" at all?

I ignore automatic emails, the same way I ignore spam. Automating
patch-sending is _not_ maintainership.

Linus

2001-12-27 19:16:59

by Rik van Riel

[permalink] [raw]
Subject: Re: The direction linux is taking

On Thu, 27 Dec 2001, Linus Torvalds wrote:
> On Thu, 27 Dec 2001, Rik van Riel wrote:
> >
> > OK, I'll setup something to automatically send you patches
> > as long as they're not applied, don't get any reaction and
> > still apply cleanly.
>
> No.
>
> Did you read the part about "maintainership" at all?

Of course the patch will be updated when needed, but I still
have a few 6-month old patches lying around that still work
as expected and don't need any change.

I see absolutely no reason not to automate the resending of
these patches; once they need maintenance again I'll maintain
them.

> I ignore automatic emails, the same way I ignore spam. Automating
> patch-sending is _not_ maintainership.

Silently dropping bugfixes on the floor is not maintainership,
either.

regards,

Rik
--
DMCA, SSSCA, W3C? Who cares? http://thefreeworld.net/

http://www.surriel.com/ http://distro.conectiva.com/

2001-12-27 19:28:01

by Linus Torvalds

[permalink] [raw]
Subject: Re: The direction linux is taking


On Thu, 27 Dec 2001, Dave Jones wrote:
> On Thu, 27 Dec 2001, Linus Torvalds wrote:
>
> > This is absolutely true - it's a _very_ powerful thing. Old patches
> > simply grow stale: keeping track of them is not necessarily at all
> > useful, and can add more work than anything else.
>
> *nod*, until they get scooped up into another tree -ac, -dj, -whatever
> and fed to you whenever you're in the mood for resyncing.

But that's nothing more than "somebody else maintains them".

I realize that quite often the author of the patch is not going to be its
maintainer, which is exactly why all the other trees are so useful.

Everybody should realize that "outside trees" are not a rogue thing. They
are _very_ important, for several reasons:

- competition keeps people honest. If I was the only holder of the keys,
nobody would even _know_ if I was corrupt. And nobody could choose with
his feet.

Look at politics: if you don't have choices, the one choice _will_ be
corrupt even if it started out with all the best intentions. The old
adage there is "Power corrupts. Absolute power corrupts absolutely".

- Different taste. Let's face it, a lot of programming is about having
taste. Sometimes I don't like the way things are done, and people prove
me wrong by other means. See the whole thing about the VM stuff with
Andrea's patches - one of the reasons I hadn't applied the much earlier
patches by Andrea was that I didn't like the zone-balancing approach.

Having external trees is _crucial_ for allowing different approaches to
co-exist, in order to show their strengths and weaknesses. And I tend
to be fairly open to admitting when I did something wrong, and somebody
else had a better tree. At least I _try_.

- Different goals. Many of the commercial vendors have vendor needs, and
they (correctly) think that those needs are the most important thing,
while I don't care about vendors and thus have different priorities.

Again, multiple trees are absolutely required to make this work.

- And imperfect patch retention. There's no question that I drop patches,
some bad, but many good. And that's going to be true of _anybody_ who
maintains anything, except somebody who just accepts anything without
question (eg CVS).

I don't think I've ever spoken out against things like -ac, -dj and -aa: I
sometimes have to explain why I do not merge things whole-sale (which
would certainly be _technically_ the easiest solution much of the time),
and I often disagree with some part of the patch, but I'm actually
surprised how often I have to _defend_ having many trees.

Just a historical note: one of the things I hated most about Minix was
that while Andrew Tanenbaum allowed external patches to the system, nobody
else could make a whole distribution. Which meant that while there existed
many trees and maintainers that were "better" (notably Bruce Evans, who
was considered to be a God of Minix), they were really painful to use, in
that you had to always do it from patches.

I fully _expect_ that somebody better comes along. At some point, more
people will simply be using the -dj tree (or whatever), and that's fine.

> And when you're ready to resync what I've got so far (currently ~3mb),
> it's going to be another full time job splitting it into bits to feed
> you linus-bite-sized chunks. (ObSidenote: When this time comes btw,
> if maintainers of relevant parts want to feed Linus their relevant
> parts from my tree, that would be appreciated, and would keep _my_ load
> down :-)

This sounds absolutely wonderful..

Note that you will notice that it's a _huge_ undertaking, and one of the
things that Alan complained about was how the fact that _I_ avoid scaling
meant that he had to scale more. I think it's a very valid complaint, and
it may make a whole lot more sense (if it is possible) to have different
people caring about different parts.

Note that this may not be possible, due to lack of modularity. We've had
to actively change the tree layout of the kernel before just to make it
easier to maintain over several people. Which is painful, but still
certainly not impossible..

> "Used to" ? cvs @ vger.samba.org was still being maintained before
> I went on xmas vacation. Did I miss something ?

Does he allow the wide and uncoordinated write access that he used to
allow? I thought he basically shut that down, and only allows a few people
now, exactly to avoid getting too horrible merge issues..

Linus

2001-12-27 19:31:41

by Linus Torvalds

[permalink] [raw]
Subject: Re: The direction linux is taking


On Thu, 27 Dec 2001, Rik van Riel wrote:
>
> Of course the patch will be updated when needed, but I still
> have a few 6-month old patches lying around that still work
> as expected and don't need any change.

Sure. Automatic re-mailing can be part of the maintainership, if the
testing of the validity of the patch is also automated (ie add an automated
note that says that it has been verified).

It's just that I actually _have_ had people who just put "mail torvalds <
crap" in their cron routines. It quickly caused them to become part of my
spam-filter, and thus _nothing_ ever showed up from them, whether
automated or not..

Linus

2001-12-27 19:33:31

by Arnaldo Carvalho de Melo

[permalink] [raw]
Subject: Re: The direction linux is taking

On Thu, Dec 27, 2001 at 07:37:16PM +0100, Dave Jones wrote:
> On Thu, 27 Dec 2001, Linus Torvalds wrote:
> "Used to" ? cvs @ vger.samba.org was still being maintained before
> I went on xmas vacation. Did I miss something ?

But only a few people have write access to the VGER cvs tree, and I see,
for example, DaveM reverting patches by Kanoj, etc. Maybe in the past way
more people had write access and that caused the problems Linus mentioned.

- Arnaldo

2001-12-27 19:47:23

by Rik van Riel

[permalink] [raw]
Subject: Re: The direction linux is taking

On Thu, 27 Dec 2001, Linus Torvalds wrote:
> On Thu, 27 Dec 2001, Rik van Riel wrote:
> >
> > Of course the patch will be updated when needed, but I still
> > have a few 6-month old patches lying around that still work
> > as expected and don't need any change.
>
> Sure. Automatic re-mailing can be part of the maintainership, if the
> testing of the validity of the patch is also automated (ie add a
> automated note that says that it has been verified).

Patch-bombing you with useless stuff has never been my
objective. I just want to make sure valid patches get
re-sent to you as long as there is a reason to believe
they still need to be sent.

As soon as any hint arrives that the patch shouldn't be
sent right now (a change was made to any of the files the
patch applies to, I see something suspect in the changelog,
the patch was applied, a reply was mailed to the patch...)
the patch will be moved away for manual inspection.

I guess I'll also build in some kind of backoff to make sure
the patch gets sent less often if you're not interested or too
busy.
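
A minimal sketch, in Python, of the resend policy just described: resend while
a patch is still pending, with a growing backoff, and pull it out of the queue
as soon as any hint arrives. The queue file layout, field names, backoff
constants and the send_mail hook are all assumptions for illustration, not an
actual patchbot implementation:

import json
import time

QUEUE_FILE = "patch-queue.json"     # one record per pending patch (assumed layout)
BASE_DELAY = 7 * 24 * 3600          # first resend after a week
MAX_DELAY = 8 * BASE_DELAY          # back off to at most every eight weeks

def should_resend(entry, now):
    # Resend only while the patch is still pending and its backoff has expired.
    if entry["state"] != "pending":
        return False
    delay = min(BASE_DELAY * (2 ** entry["resends"]), MAX_DELAY)
    return now - entry["last_sent"] >= delay

def mark_hint(entry, reason):
    # Any hint (a reply, a change to a file the patch touches, a suspect
    # changelog entry) pulls the patch out of the automatic queue so it
    # can be inspected by hand.
    entry["state"] = "needs-attention"
    entry["reason"] = reason

def run_once(send_mail):
    # One pass over the queue: resend what is due, record when and how often.
    now = time.time()
    with open(QUEUE_FILE) as f:
        queue = json.load(f)
    for entry in queue:
        if should_resend(entry, now):
            send_mail(entry["to"], entry["subject"], entry["patch_file"])
            entry["resends"] += 1
            entry["last_sent"] = now
    with open(QUEUE_FILE, "w") as f:
        json.dump(queue, f, indent=2)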

regards,

Rik
--
DMCA, SSSCA, W3C? Who cares? http://thefreeworld.net/

http://www.surriel.com/ http://distro.conectiva.com/

2001-12-27 19:57:43

by Richard Gooch

[permalink] [raw]
Subject: Re: The direction linux is taking

Rik van Riel writes:
> On Thu, 27 Dec 2001, Linus Torvalds wrote:
> > On Thu, 27 Dec 2001, Rik van Riel wrote:
> > >
> > > Of course the patch will be updated when needed, but I still
> > > have a few 6-month old patches lying around that still work
> > > as expected and don't need any change.
> >
> > Sure. Automatic re-mailing can be part of the maintainership, if the
> > testing of the validity of the patch is also automated (ie add a
> > automated note that says that it has been verified).
>
> Patch-bombing you with useless stuff has never been my
> objective. I just want to make sure valid patches get
> re-sent to you as long as there is a reason to believe
> they still need to be sent.
>
> As soon as any hint arrives that the patch shouldn't be
> sent right now (a change was made to any of the files the
> patch applies to, I see something suspect in the changelog,
> the patch was applied, a reply was mailed to the patch...)
> the patch will be moved away for manual inspection.
>
> I guess I'll also build in some kind of backoff to make sure
> the patch gets sent less often if you're not interested or too
> busy.

If you get this working nicely, it might even be a generally useful
thing. A set of perl scripts and easy interface commands could prove
popular. I would certainly find it convenient to have a patch
retransmission system that re-sent patches every time a new pre-patch
came out, and emailed me when the patch no longer applies. If it could
automatically de-queue when the patch is applied, or when I manually
remove it, that would be even better. And if an update to a queued patch
obsoleted the old one, that would be good too.
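
As a rough illustration of the "emailed me when the patch no longer applies"
part, a dry-run check against a freshly unpacked pre-patch tree could look
like the sketch below; the GNU patch output parsing is simplified, and the
tree path and patch name are made up:

import os
import subprocess

def patch_status(tree_dir, patch_file):
    # Classify a patch against an unpacked tree without modifying it:
    # "clean", "offset" (applies but wants refreshing), or "rejected".
    result = subprocess.run(
        ["patch", "-p1", "--dry-run", "-i", os.path.abspath(patch_file)],
        cwd=tree_dir, capture_output=True, text=True)
    output = result.stdout + result.stderr
    if result.returncode != 0 or "FAILED" in output:
        return "rejected"       # mail the owner: it no longer applies
    if "offset" in output or "fuzz" in output:
        return "offset"
    return "clean"

# Example call (paths are made up):
# patch_status("/usr/src/linux-2.5.2-pre1", "devfs-fix.diff")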

Regards,

Richard....
Permanent: [email protected]
Current: [email protected]

2001-12-27 20:07:55

by Rik van Riel

[permalink] [raw]
Subject: Re: The direction linux is taking

On Thu, 27 Dec 2001, Richard Gooch wrote:

> If you get this working nicely, it might even be a generally useful
> thing. A set of perl scripts and easy interface commands could prove
> popular. I would certainly find it convenient to have a patch
> retransmission system that re-sent patches every time a new pre-patch
> came out, and emailed me when the patch no longer applies.

... or compiles, or applies with an offset

> If it could automatically de-queue when the patch is applied, or when
> I manually remove it, that would be even better.

... or when somebody replies to the patch and the reply
gets caught by a program invoked from your .procmailrc

If Linus replies he has seen the patch, don't keep
bombing him.

Of course when the patch gets dequeued, the program should
send you a mail with the reason.

> And if I make an update to a queued patch, it obsoletes the old one,
> that would be good too.

Good one, this needs to be added.

Any more requirements / ideas / volunteers / ... ?

(and remember, this thing is designed to make Linus's life
easier, too)

regards,

Rik
--
DMCA, SSSCA, W3C? Who cares? http://thefreeworld.net/

http://www.surriel.com/ http://distro.conectiva.com/

2001-12-27 20:15:45

by Linus Torvalds

[permalink] [raw]
Subject: Re: The direction linux is taking


On Thu, 27 Dec 2001, Rik van Riel wrote:
> > came out, and emailed me when the patch no longer applies.
>
> ... or compiles, or applies with an offset

Good.

We actually talked inside Transmeta about doing a lot of this automation
centralized (and OSDL took up some of that idea), but yes, from a resource
usage sanity standpoint this is something that _trivially_ can be done at
the sending side, and thus scales out perfectly (while trying to do it at
the receiving end requires some _mondo_ hardware that definitely doesn't
scale, especially for the "compiles cleanly" part).
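
A sender-side check of that sort could be as small as the following sketch;
the scratch-tree path and make target are assumptions, and the point is only
that both the apply test and the "compiles cleanly" test run on the
submitter's own machine rather than the maintainer's:

import os
import subprocess

def validate(scratch_tree, patch_file):
    # Apply the patch for real in a throwaway copy of the tree, then try to
    # build it.  Both steps run on the sender's hardware, so the cost scales
    # with the number of submitters rather than piling up on one maintainer.
    applied = subprocess.run(
        ["patch", "-p1", "-i", os.path.abspath(patch_file)], cwd=scratch_tree)
    if applied.returncode != 0:
        return False
    built = subprocess.run(["make", "-s", "bzImage"], cwd=scratch_tree)
    return built.returncode == 0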

Linus

2001-12-27 20:10:55

by Larry McVoy

[permalink] [raw]
Subject: Re: The direction linux is taking

On Thu, Dec 27, 2001 at 06:05:40PM +0000, Linus Torvalds wrote:
> Note that things like CVS do not help the fundamental problem at all.
> They allow automatic acceptance of patches, and positively _encourage_
> people to "dump" their patches on other people, and not act as real
> maintainers.

Huh. I'm not sure I understand this. Once you accept a patch into the
mainline source, are these people still supposed to maintain that patch?
I would think the patch is now sort of dead, and any subsequent changes
are a new patch, right? If so, what I'm missing is how a source
management system makes a difference in this case; it seems sort of
orthogonal.

> We've seen this several times in Linux - David, for example, used to
> maintain his CVS tree, and he ended up being rather frustrated about
> having to then maintain it all and clean up the bad parts because I
> didn't want to apply them (and he didn't really want me to) and he
> couldn't make people clean up themselves because "once it was in, it was
> in".

Isn't this a limitation of CVS? I really don't want to get into a
"BitKeeper is better" discussion, but the PPC guys use BK and manage to
extract the right parts of the tree to send you as patches. In fact, BK
can extract any logical change as a patch with "bk export -tpatch <rev>".
If Dave had been using BK, would that have helped or not?

> I know that source control advocates say that using source control makes
> it easy to revert bad stuff, but that's simply not TRUE. It's _not_
> easy to revert bad stuff.

It's trivial to revert bad stuff if other stuff hasn't come to depend
on that bad stuff, assuming a reasonable SCM system. There are really
two issues here: one is the bookkeeping necessary to be able to say
"make this patch go away", and BK does that with a "bk cset -x<rev>",
but the second is much harder. The second is "how do I undo this patch
now that other stuff has built on it?". Where "built on it" means that
if I were to reverse-patch the files, the reverse patch would have rejects.

If you can deal with #2, BK can deal with #1. And I can give you help
with #2 in the form of showing you what changed and why. It's basically
the same problem as merging and we do that well.

> The only way to handle bad stuff is to make
> people _responsible_ for their own sh*t, and have them maintain it
> themselves.

Isn't this just a "reject" button on the patch?
--
---
Larry McVoy lm at bitmover.com http://www.bitmover.com/lm

2001-12-27 20:17:35

by Dave Jones

[permalink] [raw]
Subject: Re: The direction linux is taking

On Thu, 27 Dec 2001, Linus Torvalds wrote:

> > > This is absolutely true - it's a _very_ powerful thing. Old patches
> > > simply grow stale: keeping track of them is not necessarily at all
> > > useful, and can add more work than anything else.
> > *nod*, until they get scooped up into another tree -ac, -dj, -whatever
> > and fed to you whenever you're in the mood for resyncing.
> But that's nothing more than "somebody else maintains them".

Absolutely. Reading every patch sent to Linux-Kernel is a big task.
(And this is without the patches sent to you that don't make it to l-k)
After spending post-xmas reading through old postings catching up, and
picking up some obvious small fixes still not merged, I'm already starting
to fall behind. It's little wonder that some bits fall through the cracks
sometimes.

CCing someone like Alan or myself or anyone else maintaining a jumbo-sized
tree stands a better chance of something hitting Linus when it gets
around to merge time, but remember that I'm only human too, and I too have
scaling issues.

> I realize that quite often the author of the patch is not going to be its
> maintainer, which is exactly why all the other trees are so useful.

You take the words right out of my mouth.

> Look at politics: if you don't have choices, the one choice _will_ be
> corrupt even if it started out with all the best intentions. The old
> adage there is "Power corrupts. Absolute power corrupts absolutely".

Totally. Though things will no doubt get interesting if Rik & Andrea
both come up with patches to achieve the same goal differently.
Sometimes choice is good. The -ac vs -linus VM scuffle was certainly an
interesting thing to watch, but I'm glad that at the time I wasn't
maintaining a 'fork'. (Alan had the benefit that neither of those involved
worked for Red Hat.) If Rik's VM-nextgeneration becomes the "vm of choice" for
2.6, and I choose to stick with Andrea-vm, I've no doubt at all I'll hear
complaints. But to reiterate: competition is good, and I'll not let
politics spoil that.

> - Different taste. Let's face it, a lot of programming is about having
> taste. Sometimes I don't like the way things are done, and people prove
> me wrong by other means. See the whole thing about the VM stuff with
> Andrea's patches - one of the reasons I hadn't applied the much earlier
> patches by Andrea was that I didn't like the zone-balancing approach.

*nod*, I've no doubt you'll have issues with some bits I've got lined
up already, that's fine. If we need a better solution for something in 2.5
than a "carried forward from 2.4" fix, I've no problem with that.
Some of the bits I'll be merging in a few releases' time, I've also
no intention of feeding you, at least unless the maintainer of
said code asks for it to be pushed your way at some point.

> Having external trees is _crucial_ for allowing different approaches to
> co-exist, in order to show their strengths and weaknesses. And I tend
> to be fairly open to admitting when I did something wrong, and somebody
> else had a better tree. At least I _try_.

*nod*. The only situation that bothers me is the situation that could
arise where we have a dozen different 'trees', none of which will apply
against each other. I'm trying to have this not happen by merging anything
that looks sensible. I'll not do things like Alan did, merging new archs,
new filesystems or other major features (2.5, after all, is the devel tree,
not -dj).

I'm already 3mb away from your tree; I've figured I've already crossed the
threshold, so now it's time to jump the chasm. Resync time may take a while,
but for as long as I've got my tree in sync, this isn't a problem.
(Insert Dave-run-over-by-bus analogy here.) And I'll happily maintain
-dj for at least as long as I find it a fun and interesting challenge.
(Even if this means overspill into 2.6-dj patches to feed Marcelo/whoever.)

> I don't think I've ever spoken out against things like -ac, -dj and -aa: I
> sometimes have to explain why I do not merge things whole-sale (which
> would certainly be _technically_ the easiest solution much of the time),
> and I often disagree with some part of the patch, but I'm actually
> surprised how often I have to _defend_ having many trees.

Half a dozen or so trees with different goals isn't such a bad thing.
If, for example, someone else started doing what I've been doing the last few
weeks, anyone wanting to experiment with 2.5 would have to dig through
patches & changelogs trying to figure out which one to use/hack on.
Nightmare if $developer wants a feature from -dj and one from -whoever.
But then again, you learn quite a bit from trying to merge competing trees 8)

> I fully _expect_ that somebody better comes along. At some point, more
> people will simply be using the -dj tree (or whatever), and that's fine.

If people want to do that whilst you rip apart a subsystem making it
unusable for the majority, I'll continue to merge the non-broken bits.

The one thing I want to make _absolutely clear_, however, is that I will
not do a maintainer's job for them wrt pushing changes to you.
Sure, I'll push the overspill of smaller changes to you (with the maintainer
cc'd where necessary), but I won't do Greg's job, for example, when it comes to
USB merging.

> Note that you will notice that it's a _huge_ undertaking, and one of the
> things that Alan complained about was how the fact that _I_ avoid scaling
> meant that he had to scale more. I think it's a very valid complaint, and
> it may make a whole lot more sense (if it is possible) to have different
> people caring about different parts.

I had anticipated this. If other projects I work on have to suffer, so be it.
To reiterate.. for as long as I find it a fun thing to do.

> > "Used to" ? cvs @ vger.samba.org was still being maintained before
> > I went on xmas vacation. Did I miss something ?
> Does he allow the wide and uncoordinated write access that he used to
> allow? I thought he basically shut that down, and only allows a few people
> now, exactly to avoid getting too horrible merge issues..

Sounds accurate. But for the net layer & Sparc arch, it's certainly
proved invaluable. Not only for anyone wanting inside-info on what's
in DaveM's queue, but also probably for DaveM himself.

Dave.

--
| Dave Jones. http://www.codemonkey.org.uk
| SuSE Labs

2001-12-27 20:23:35

by Linus Torvalds

[permalink] [raw]
Subject: Re: The direction linux is taking


On Thu, 27 Dec 2001, Larry McVoy wrote:
> On Thu, Dec 27, 2001 at 06:05:40PM +0000, Linus Torvalds wrote:
> > Note that things like CVS do not help the fundamental problem at all.
> > They allow automatic acceptance of patches, and positively _encourage_
> > people to "dump" their patches on other people, and not act as real
> > maintainers.
>
> Huh. I'm not sure I understand this. Once you accept a patch into the
> mainline source, are these people still supposed to maintain that patch?

Yes, I actually do expect them to.

It obviously depends on the kind of patch: if it is a one-liner bug-fix,
the patch is pretty much dead (that is, of course, assuming it was a
_correct_ bug-fix and didn't expose any other latent bugs).

But for most things, it's a kind of "Tag, you're it" thing. You're
supposed to support the patch (ie step up and explain what it does if
anybody wonders), and help it evolve. Many patches are only stepping
stones.

(This, btw, is something that Al Viro does absolutely beautifully. I don't
know how many people look at Al's progression of patches, but they are
stand-alone patches on their own, while at the same time _also_ being part
of a larger migration to the inscrutable goals of Al - ie namespaces etc.
You may not realize just _how_ impressive that is, and what an absolute
wonder it is to work with the guy. Poetry in patches, indeed).

> > I know that source control advocates say that using source control makes
> > it easy to revert bad stuff, but that's simply not TRUE. It's _not_
> > easy to revert bad stuff.
>
> It's trivial to revert bad stuff if other stuff hasn't come to depend
> on that bad stuff, assuming a reasonable SCM system.

Well, there's the other part to it - most bad stuff is just "random crap",
and may not have any physical bad tendencies except to make the code
uglier. Then, people don't even realize that they are doing things the
wrong way, because they do cut-and-paste, or they just can't do things the
sane way because the badness assumes a certain layout.

And THAT is where badness is actively hurtful, while not being buggy.
Which is why I'd much rather have people work on maintenance, and not rely
on the bogus argument of "we can always undo it".

Linus

2001-12-27 20:34:06

by Larry McVoy

[permalink] [raw]
Subject: Re: The direction linux is taking

On Thu, Dec 27, 2001 at 12:21:02PM -0800, Linus Torvalds wrote:
>
> On Thu, 27 Dec 2001, Larry McVoy wrote:
> > On Thu, Dec 27, 2001 at 06:05:40PM +0000, Linus Torvalds wrote:
> > > Note that things like CVS do not help the fundamental problem at all.
> > > They allow automatic acceptance of patches, and positively _encourage_
> > > people to "dump" their patches on other people, and not act as real
> > > maintainers.
> >
> > Huh. I'm not sure I understand this. Once you accept a patch into the
> > mainline source, are these people still supposed to maintain that patch?
>
> [Linus stuff]

But this didn't answer my question at all. My question was: why is this a
problem related to a source management system? I can see how to exactly
mimic what you described Al doing in BK, so if that is the definition of
goodness, the addition (or absence) of an SCM doesn't seem to change the answer.

I _think_ what you are saying is that an SCM where your repository is a
wide open black hole with no quality control is a problem, but that's
not the SCM's fault. You are the filter, the SCM is simply an accounting/
filing system.

> > > I know that source control advocates say that using source control makes
> > > it easy to revert bad stuff, but that's simply not TRUE. It's _not_
> > > easy to revert bad stuff.
> >
> > It's trivial to revert bad stuff if other stuff hasn't come to depend
> > on that bad stuff, assuming a reasonable SCM system.
>
> Well, there's the other part to it - most bad stuff is just "random crap",
> and may not have any physical bad tendencies except to make the code
> uglier. Then, people don't even realize that they are doing things the
> wrong way, because they do cut-and-paste, or they just can't do things the
> sane way because the badness assumes a certain layout.
>
> And THAT is where badness is actively hurtful, while not being buggy.
> Which is why I'd much rather have people work on maintenance, and not rely
> on the bogus argument of "we can always undo it".

No argument. In fact, wild agreement. I absolutely *hate* bad crap
being checked into the tree because when it is fixed later it obscures
the original reason for the addition of the code in the first place.
While we rarely reach it, I think we can agree it would be great if code
were checked in once and never modified again because it is perfect.
Obviously a pipe dream, but I think it is the sentiment you are expressing
- don't check in garbage, check in good stuff, and anything that makes
checking in garbage easier is a Bad Thing (tm).

Switching topics just slightly, isn't one of the main problems with SCM
systems that the end user does the merges rather than the maintainer?
Look at how you do it:

a) release tree
b) wait for patches
c) weed through patches looking for good ones
d) apply patches, which means merging in some cases
e) repeat

but your typical SCM has the end user doing the merges, not the maintainer.
If you had an SCM system which allowed the maintainer to do all or some of
the merging, would that help?
--
---
Larry McVoy lm at bitmover.com http://www.bitmover.com/lm

2001-12-27 20:34:15

by Alan

[permalink] [raw]
Subject: Re: The direction linux is taking

> Huh. I'm not sure I understand this. Once you accept a patch into the
> mainline source, are these people still supposed to maintain that patch?
> I would think the patch is now sort of dead, and any subsequent changes

The patch may be dead, but you want a likelihood that the person who made
the patch will continue to fix it if it added new stuff. If it's a bug fix
it may well be dead; if it's a driver or a chunk of vm code then it needs
maintaining longer term.

Alan

2001-12-27 20:43:45

by Linus Torvalds

[permalink] [raw]
Subject: Re: The direction linux is taking


On Thu, 27 Dec 2001, Larry McVoy wrote:
> > >
> > > Huh. I'm not sure I understand this. Once you accept a patch into the
> > > mainline source, are these people still supposed to maintain that patch?
> >
> > [Linus stuff]
>
> But this didn't answer my question at all. My question was why is this a
> problem related to a source management system? I can see how to exactly
> mimic what you described Al doing in BK, so if that is the definition of
> goodness, the addition (or absence) of an SCM doesn't seem to change the answer.

Ok, I see what you are asking for.

No, I'm taking a bigger view. A patch is not just a "patch". A patch has a
lot of stuff around it, one being the unknowable information on whether
the sender of the patch is somebody who will do a good job maintaining the
things the patch impacts.

That's something a source control system doesn't give you - but that
doesn't mean that you cannot use a SCM as a tool anyway.

> I _think_ what you are saying is that an SCM where your repository is a
> wide open black hole with no quality control is a problem, but that's
> not the SCM's fault. You are the filter, the SCM is simply an accounting/
> filing system.

Right. But that's true only if I use SCM as a _personal_ medium, which
doesn't help my external patch acceptance.

So even if I used CVS or BK internally, that's not what people _gripe_
about. People want write access, not just a SCM.

> but your typical SCM has the end user doing the merges, not the maintainer.
> If you had an SCM system which allowed the maintainer to do all or some of
> the merging, would that help?

Well, that's what the filesystem is for me right now ;)

Linus

2001-12-27 20:45:25

by Dana Lacoste

[permalink] [raw]
Subject: RE: The direction linux is taking

> But this didn't answer my question at all. My question was
> why is this a
> problem related to a source management system? I can see how
> to exactly
> mimic what described Al doing in BK so if that is the
> definition of goodness,
> the addition (or absence) of a SCM doesn't seem to change the answer.

> I _think_ what you are saying is that an SCM where your
> repository is a
> wide open black hole with no quality control is a problem, but that's
> not the SCM's fault. You are the filter, the SCM is simply
> an accounting/
> filing system.

<deletia>

> but your typical SCM has the end user doing the merges, not
> the maintainer.
> If you had an SCM system which allowed the maintainer to do
> all or some of
> the merging, would that help?

I think the problem becomes one of usability: is there any
way that the SCM system can be easy enough to use?

Or, put another way:
why use the SCM if the features it gives are being supplied
in a completely acceptable manner by the maintainer?
If Linus is doing it on his own, and you're suggesting that
he set the SCM up so that he does it all on his own in the
end anyway, why should he add an extremely obtrusive step
(SCM) to the mix? Why should it be any harder on his day-to-day
methodology that he's already comfortable with?

If SCM is just a distribution mechanism, then it's not
something that's particularly interesting. If SCM is
only allowing a single user to apply patches, then it's
not particularly useful in reducing the workload of that
person (if they've got the organizational skills to manage
the whole thing, then adding another layer to work through
isn't going to help!)

Don't get me wrong, I'm all for SCM; I just don't think
that applying SCM is going to relieve any patch confusion,
and it's not going to add any real benefit either....

(If, on the other hand, we allowed multiple committers
and access-controlled maintainer lists, then SCM would
be beautiful! But this isn't FreeBSD :) :) :) :) :)

/me ducks for that last comment

Dana Lacoste
Ottawa, Canada

2001-12-27 20:55:49

by Larry McVoy

[permalink] [raw]
Subject: Re: The direction linux is taking

On Thu, Dec 27, 2001 at 12:45:08PM -0800, Dana Lacoste wrote:
> why use the SCM if the features it gives are being supplied
> in a completely acceptable manner by the maintainer?
> If Linus is doing it on his own, and you're suggesting that
> he set the SCM up so that he does it all on his own in the
> end anyways, why should he add an extremely obtrusive step
> (SCM) to the mix? Why should it be any harder on his day
> to day methodology that he's already comfortable with?

Merging is much easier.
Tracking of patches is much easier.
Access control is much easier.
Etc.

> (If, on the other hand, we allowed multiple committers
> and access-controlled maintainer lists, then SCM would
> be beautiful! but this isn't FreeBSD :) :) :) :) :)

Actually, BK can definitely do that. In fact, that's basically exactly what
we have on the hosting service for the PPC tree. There is a list of people
who are administrators, a list of committers, as well as read-only access.
The admins are also committers if they want to be, and the admins also get to
control who is and is not a committer.

And you can dream up as complicated an access control model as you want. We
can do pretty much any model you can describe. Try me: describe a workflow
that you think would be useful, I'll write up how to do it and stick
it on a web page, and you can throw stones at it and see if it breaks.
--
---
Larry McVoy lm at bitmover.com http://www.bitmover.com/lm

2001-12-27 20:50:55

by Larry McVoy

[permalink] [raw]
Subject: Re: The direction linux is taking

On Thu, Dec 27, 2001 at 12:41:15PM -0800, Linus Torvalds wrote:
> No, I'm taking a bigger view. A patch is not just a "patch". A patch has a
> lot of stuff around it, one being the unknowable information on whether
> the sender of the patch is somebody who will do a good job maintaining the
> things the patch impacts.
>
> That's something a source control system doesn't give you - but that
> doesn't mean that you cannot use a SCM as a tool anyway.

OK, cool, just checking. We're on the same page.

> > I _think_ what you are saying is that an SCM where your repository is a
> > wide open black hole with no quality control is a problem, but that's
> > not the SCM's fault. You are the filter, the SCM is simply an accounting/
> > filing system.
>
> Right. But that's true only if I use SCM as a _personal_ medium, which
> doesn't help my external patch acceptance.
>
> So even if I used CVS or BK internally, that's not what people _gripe_
> about. People want write access, not just a SCM.

I think it is important to distinguish between BK and CVS. CVS can't do
what you want, BK can.

People can't have write access in CVS for the obvious reasons: the tree
becomes a chaotic mess of stuff that hasn't been filtered. But in BK,
because each workspace is a repository, people inherently have write
access to *their* repository. So they get SCM. And they may eventually
get their stuff into your tree if you ever accept the changeset.

There are problems with this, BK isn't perfect, but it is much closer
to solving the set of problems you are describing than CVS can ever
hope to be.

> > but your typical SCM has the end user doing the merges, not the maintainer.
> > If you had an SCM system which allowed the maintainer to do all or some of
> > the merging, would that help?
>
> Well, that's what the filesystem is for me right now ;)

Yes, and it works great for easy merges. It sucks for complicated merges.
BK can help you a great deal with those merges.
--
---
Larry McVoy lm at bitmover.com http://www.bitmover.com/lm

2001-12-27 21:19:00

by Rik van Riel

[permalink] [raw]
Subject: Re: The direction linux is taking

On Thu, 27 Dec 2001, Troy Benjegerdes wrote:

> Maintainers for a specific area of interest/kernel tree/whatever can run a
> 'canned' set of scripts on a web server which act as a controller for a
> patchbot, and a set of 'build machines' that actually do the compiles.

> http://altus.drgw.net/description.html
>
> I'll volunteer these scripts as well as whatever amount of time I can
> spare from 'real' work ;)

Cool !

> Does anyone else this discussion merits it's own mailing list.. ?

[email protected]

You can subscribe by mailing to [email protected] with
"subscribe patchbot" in the message.

Once I've gotten around to it, http://patchbot.nl.linux.org/
should contain some content, too. (or once somebody else has
gotten around to it)

cheers,

Rik
--
DMCA, SSSCA, W3C? Who cares? http://thefreeworld.net/

http://www.surriel.com/ http://distro.conectiva.com/

2001-12-27 21:21:20

by Jeff Garzik

[permalink] [raw]
Subject: Re: The direction linux is taking

On Thu, Dec 27, 2001 at 07:37:16PM +0100, Dave Jones wrote:
> On Thu, 27 Dec 2001, Linus Torvalds wrote:
> > We've seen this several times in Linux - David, for example, used to
> > maintain his CVS tree, and he ended up being rather frustrated about
> > having to then maintain it all and clean up the bad parts because I
> > didn't want to apply them (and he didn't really want me to) and he
> > couldn't make people clean up themselves because "once it was in, it was
> > in".
>
> "Used to" ? cvs @ vger.samba.org was still being maintained before
> I went on xmas vacation. Did I miss something ?

Kinda-sorta...

vger cvs is maintained by DaveM and kept current, but one catches holy hell
from Dave if the non-DaveM patches in vger are not merged into the
Linus/Marcelo trees rapidly ;-) So in that way vger cvs is not really a
branch but a staging area for the official tree.

Jeff


2001-12-27 21:14:20

by Troy Benjegerdes

[permalink] [raw]
Subject: Re: The direction linux is taking

On Thu, Dec 27, 2001 at 12:12:54PM -0800, Linus Torvalds wrote:
>
> On Thu, 27 Dec 2001, Rik van Riel wrote:
> > > came out, and emailed me when the patch no longer applies.
> >
> > ... or compiles, or applies with an offset
>
> Good.
>
> We actually talked inside Transmeta about doing a lot of this automation
> centralized (and OSDL took up some of that idea), but yes, from a resource
> usage sanity standpoint this is something that _trivially_ can be done at
> the sending side, and thus scales out perfectly (while trying to do it at
> the receiving end requires some _mondo_ hardware that definitely doesn't
> scale, especially for the "compiles cleanly" part).

So here's an idea:

Maintainers for a specific area of interest/kernel tree/whatever can run a
'canned' set of scripts on a web server which act as a controller for a
patchbot, and a set of 'build machines' that actually do the compiles.

(i.e., davej, andrea, riel, etc would have their own webserver which
acts as a central location for data collection, as well as a place for
users to download stuff from)

Actually compiling gets done either by users that want to use that kernel,
or in the case of a vendor, an internal build farm. The users have another
'canned' script that downloads the kernel, patches it, and builds it with
a user-supplied or server-supplied config file. The script uploads the
results of the build so maintainers can see what happened, and the web
server provides some mechanism for users to say what did and did not work.
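
A minimal sketch of what such a user-side 'canned' script could look like,
written here in Python. The server URL, patch identifier, upload endpoint and
tree path are illustrative assumptions for the sake of the example, not the
actual scripts Troy describes:

    #!/usr/bin/env python3
    # Hypothetical user-side build script: fetch a patch from a maintainer's
    # webserver, apply it to a local tree, build, and upload the result.
    # SERVER, PATCH_ID and TREE are assumptions made for illustration.
    import subprocess, urllib.parse, urllib.request

    SERVER = "http://maintainer.example.org"    # assumed maintainer webserver
    PATCH_ID = "danaDriver-2.5.2-pre1"          # assumed patch identifier
    TREE = "/usr/src/linux-2.5"                 # local tree to test against

    def run(cmd, **kw):
        # run a command inside the kernel tree, capturing its output
        return subprocess.run(cmd, cwd=TREE, capture_output=True, text=True, **kw)

    # 1. download the patch from the maintainer's server
    patch = urllib.request.urlopen("%s/patch/%s" % (SERVER, PATCH_ID)).read().decode()

    # 2. try to apply it (dry-run first, then for real)
    dry = run(["patch", "-p1", "--dry-run"], input=patch)
    if dry.returncode != 0:
        status, log = "reject", dry.stdout + dry.stderr
    else:
        run(["patch", "-p1"], input=patch)
        # 3. build with a user- or server-supplied config already in the tree
        build = run(["make", "-j2", "bzImage", "modules"])
        status = "ok" if build.returncode == 0 else "buildfail"
        log = (build.stdout + build.stderr)[-4000:]     # last few KB is enough

    # 4. upload the result so the maintainer's patchbot can score the patch
    report = urllib.parse.urlencode(
        {"patch": PATCH_ID, "status": status, "log": log}).encode()
    urllib.request.urlopen("%s/report" % SERVER, data=report)

A maintainer-side counterpart would simply collect these reports and use them
to decide whether to keep re-sending a patch, dequeue it, or flag it as broken.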

Once the webserver gets some data back, the patchbot can figure out
whether a particular patch was a 'success' or not, and decide whether to
send it, dequeue it, or whatever.

We should probably also add the ability for end-users to submit their own
patches to a maintainer, or provide a way for end-users to set up the
webserver system so they can do the same thing the maintainers are doing.


The most important part here is that this system has to be less work for
maintainers than responding to hundreds of emails and checking all the time
whether a patch made it in. (I think this should be relatively easy.)
It's got to be easy to set up, both for maintainers and users.


I've got some reasonably nice python scripts that currently act as the
'build system' part of this, and some somewhat ugly scripts that run on a
webserver. A brief description is available here:

http://altus.drgw.net/description.html

I'll volunteer these scripts as well as whatever amount of time I can
spare from 'real' work ;)

Does anyone else think this discussion merits its own mailing list?

--
Troy Benjegerdes | master of mispeeling | 'da hozer' | [email protected]
-----"If this message isn't misspelled, I didn't write it" -- Me -----
"Why do musicians compose symphonies and poets write poems? They do it
because life wouldn't have any meaning for them if they didn't. That's
why I draw cartoons. It's my life." -- Charles Schulz

2001-12-27 21:24:40

by Dana Lacoste

[permalink] [raw]
Subject: RE: The direction linux is taking

Mostly playing devil's advocate here :)

> Merging is much easier.

how exactly? the actual merge is done
from a patch, and if it doesn't apply
cleanly then it's probably not wanted
anyway :)

(on the other hand SCM makes it MUCH easier
to deal with the 'cleanly applied' part :)

> Tracking of patches is much easier.

not really : how do you make sure that all the
correct patches have been applied? All SCM
lets you gain is knowing what patches have
been applied, not what patches were NOT applied.

> Access control is much easier.

but if it's only Linus, then this is a moot point :)

> > (If, on the other hand, we allowed multiple committers
> > and access-controlled maintainer lists, then SCM would
> > be beautiful! but this isn't FreeBSD :) :) :) :) :)

> Actually, BK can definitely do that.

I HOPE SO! it's kinda the whole basic essential component for
any multi-user SCM system! The problem isn't that BK can't,
but that Linus won't :) :) :)

> In fact, that's basically exactly what
> we have on the hosting service for the PPC tree. There are a
> list of people
> who are administrators, a list of committers, as well as read
> only access.
> The admins are also committers if they want to be, the admins
> also get to
> control who is and is not a committer.

which is pretty much what FreeBSD (for example) does.
of course they're using CVS (and we won't go there :)

> And you dream up as complicated an access control model as
> you want. We
> can do pretty much any model you can describe. Try me,
> describe a work
> flow that you think would be useful, I'll write up how to do
> it and stick
> it on a web page and you can throw stones at it and see if it breaks.

ok, i have to go learn bitkeeper now so i can answer this
intelligently, but i'll give some examples from perforce
(which is what i'm using now :)

Common task 1 : usability
perforce tracks what everyone has. this means that if you want
to do a sync to current, it only gives you what's changed since
your last sync, because it knows, and you didn't do anything
without telling it, right?

well, what if i'm working on 2.5 stuff for my magical danaDriver?
It's really intense, has a lot of files all over the place, and I
don't want to hurt anything. Then someone asks me about the
interaction between danaDriver and reiserFS in 2.4.17 and I want
to make sure I can see exactly what they're talking about.

so what i want to do (and i can't do in perforce, well, not easily)
is to make a clean checkout of the last 'official release' of the
whole project from SCM _without_ affecting my workspace.

i.e. i do NOT want to create a new workspace, i do not want to create
a special directory, i just want to do :
mkdir linux
cvs checkout linux -tag 2.4.17
and get the whole linux-2.4.17 source code, without affecting my working
directory 'linux-2.5' which also has the linux/* (main)branch checked out.

In perforce, for example, I have to :
(yes, with branches this can be done MUCH easier, but i'm trying to
prove a point here :)
1 - make a new client spec. although this is effectively a plaintext file
    and can be automated with scripts, it's really dumb that i have to do this.
2 - set my environment variables to use this new client spec.
3 - run a p4 sync -tag 2.4.17 (the server says "hey, there's no files there!"
    and checks out the whole thing for me)
4 - change my environment variables back and go on working in 2.5 danaDriver.
essentially having 2 workspaces, with the environment variable the diff
between them.

of course there's no .CVS directories here, as perforce doesn't use them,
so the checkout is 'clean' :)

Common Task 2 : Accountability

This is something perforce does REALLY well.
I can do this :
- set it up so anyone can submit a change request [patch] but it has
to be approved by the directory/file's owner first. if it's not
approved, it can't be submitted.
this means that ANYONE can submit a patch, and everyone gets to
participate, and accountability is maintained.
- set it up so that interested parties get notified of every change
that is submitted, including a web link to the full diffs of that
change. notification is file/directory based, and can have 'excludes'
so rik can get notified every time 'virtualmemory.c' is changed so
that way he can start flaming andrea right away :) :) :) :) :)

Can you do these things with bitkeeper? (yeah, i'll go read the website
info :)

Dana Lacoste
Ottawa, Canada

2001-12-27 21:29:00

by Richard Gooch

[permalink] [raw]
Subject: Re: The direction linux is taking

Rik van Riel writes:
> On Thu, 27 Dec 2001, Richard Gooch wrote:
>
> > If you get this working nicely, it might even be a generally useful
> > thing. A set of perl scripts and easy interface commands could prove
> > popular. I would certainly find it convenient to have a patch
> > retransmission system that re-sent patches every time a new pre-patch
> > came out, and emailed me when the patch no longer applies.
>
> ... or compiles, or applies with an offset

Yes, that was implied. If a patch doesn't apply 100% cleanly with no
fuzz, it should be de-queued and the patch log sent back to me. I'd
want to go in manually and see what's changed (harsh, but reduces the
potential for bit-rot).

> > If it could automatically de-queue when the patch is applied, or when
> > I manually remove it, that would be even better.
>
> ... or when somebody replies to the patch and the reply
> gets caught by a program invoked from your .procmailrc

Yes. But be careful here. I don't want a DoS attack on my queue just
by having people reply to patch announcements.

> If Linus replies he has seen the patch, don't keep
> bombing him.

Yes, although it might be hard to classify properly. A message like
"not now, I'm busy, send it later" shouldn't de-queue.

> Of course when the patch gets dequeued, the program should
> send you a mail with the reason.

Always. I think I'd also want to be notified every time the patch is
sent upstream. I like to know things are still working.

> > And if I make an update to a queued patch, it obsoletes the old one,
> > that would be good too.
>
> Good one, this needs to be added.
>
> Any more requirements / ideas / volunteers / ... ?

I was thinking some more about this in the shower. I'd want some kind
of hierarchical tree processing that supports multiple kernel
versions (and hence upstream maintainers) and multiple patches per
kernel.

$ROOT/
    MAINTAINER                  (file with my email address)
    v2.4/
        UPSTREAM-MAINTAINER     (file with email address)
        official                (symlink to kernel/v2.4)
        build                   (symlink to build tree)
        configs/                (directory of config files to try)
        proj1/
            patch.gz            (symlink elsewhere)
            .status
            comment             (file with "Hi, Marcelo, this does...")
            configs/            (directory of config files to try)
        proj2/
            patch.gz            (symlink elsewhere)
            .status

    v2.5/
        UPSTREAM-MAINTAINER     (file with email address)
        official                (symlink to kernel/v2.5)
        build                   (symlink to build tree)

You get the idea. The config files are used to do test compilations,
and it would be nice if I could tag some config files so that the
resultant kernel and modules are moved into some other place. Perhaps
just by invoking a user-specified script. The per-project config files
supplement the per kernel-version config files.

The script which manages this should be lightweight enough to process
the tree every minute (so that newly queued patches are sent quickly)
and should be designed to also work with a .procmailrc recipe that is
called when your local kernel.org mirror is updated.

Probably files like MAINTAINER and UPSTREAM-MAINTAINER should be
scanned for in every directory, so that files further down override
ones above. Maybe I have a networking patch that I want to send to
Dave rather than to Linus.
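
A small sketch of how that per-directory override could be resolved against
the layout above; the helper below is purely illustrative, not part of any
existing script:

    # Sketch: walk up from a project directory towards $ROOT, letting the
    # closest MAINTAINER / UPSTREAM-MAINTAINER file win. Illustrative only.
    import os

    def nearest(start, root, name):
        """Return the contents of the closest `name` file at or above `start`."""
        d, root = os.path.abspath(start), os.path.abspath(root)
        while True:
            f = os.path.join(d, name)
            if os.path.isfile(f):
                return open(f).read().strip()
            parent = os.path.dirname(d)
            if d == root or parent == d:
                return None
            d = parent

    # e.g. a networking patch kept under v2.5/netfix/ (hypothetical project)
    # could carry its own UPSTREAM-MAINTAINER file pointing at Dave instead
    # of Linus:
    #   send_to   = nearest("v2.5/netfix", ".", "UPSTREAM-MAINTAINER")
    #   send_from = nearest("v2.5/netfix", ".", "MAINTAINER")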

Regards,

Richard....
Permanent: [email protected]
Current: [email protected]

2001-12-27 21:43:53

by Troy Benjegerdes

[permalink] [raw]
Subject: Re: The direction linux is taking


[snip]

> > > but your typical SCM has the end user doing the merges, not the maintainer.
> > > If you had an SCM system which allowed the maintainer to do all or some of
> > > the merging, would that help?
> >
> > Well, that's what the filesystem is for me right now ;)
>
> Yes, and it works great for easy merges. It sucks for complicated merges.
> BK can help you a great deal with those merges.

There is a point to be made though that if *Linus* has to do a complicated
merge, the 'patch' that caused the merge should probably be suspect in the
first place.

The person sending the patch should be the one responsible for resolving a
complicated merge. If BK makes that easier, great. HOWEVER, I don't really
want Linus to be using some tool that does automerging.. No SCM system and
automerge tool is going to understand what the code *means*, unless it's
got a compiler integrated into it.

I've had some strange things happen on a BK automerge in the past, and I
don't trust any automated system that doesn't understand the code to not
make some subtle semantic mistake. (Mind you, when strange things
happened, the code usually worked, and I didn't notice until I tried to
*manually* prepare a 'patch' to send upstream)


--
Troy Benjegerdes | master of mispeeling | 'da hozer' | [email protected]
-----"If this message isn't misspelled, I didn't write it" -- Me -----
"Why do musicians compose symphonies and poets write poems? They do it
because life wouldn't have any meaning for them if they didn't. That's
why I draw cartoons. It's my life." -- Charles Schulz

2001-12-27 21:40:14

by Larry McVoy

[permalink] [raw]
Subject: BK stuff [was Re: The direction linux is taking]

My apologies to the list if this is considered off topic. I changed the
subject so you can kill this thread if you like. I know how popular BK
discussions are :-)

On Thu, Dec 27, 2001 at 01:24:27PM -0800, Dana Lacoste wrote:
> Mostly playing devil's advocate here :)
>
> > Merging is much easier.
>
> how exactly?

It's probably an off topic conversation, but in the interesting case (i.e.,
it won't patch cleanly), both text based and GUI based tools are available
to help with the merge. They are better than anything you're used to or
I'll eat my hat. For example, if you are a CVS user, you are used to this:

<<<<<<< local slib.c 1.645
sc = sccs_init(file, INIT_NOCKSUM|INIT_SAVEPROJ, s->proj);
assert(HASGRAPH(sc));
sccs_sdelta(sc, sccs_ino(sc), file);
<<<<<<< remote slib.c 1.642.2.1
sc = sccs_init(file, INIT_NOCKSUM|INIT_SAVEPROJ, p);
assert(sc->tree);
sccs_sdelta(sc, sc->tree, file);
>>>>>>>

but we can give you this:

<<<<<<< local slib.c 1.642.1.6 vs 1.645
sc = sccs_init(file, INIT_NOCKSUM|INIT_SAVEPROJ, s->proj);
- assert(sc->tree);
- sccs_sdelta(sc, sc->tree, file);
+ assert(HASGRAPH(sc));
+ sccs_sdelta(sc, sccs_ino(sc), file);
<<<<<<< remote slib.c 1.642.1.6 vs 1.642.2.1
- sc = sccs_init(file, INIT_NOCKSUM|INIT_SAVEPROJ, s->proj);
+ sc = sccs_init(file, INIT_NOCKSUM|INIT_SAVEPROJ, p);
assert(sc->tree);
sccs_sdelta(sc, sc->tree, file);
>>>>>>>

Why is that better? It's essentially two inline context diffs, so you can
see what each side did. Much easier to merge when you can tell what is
going on.

The GUI tools give you the second style as well as some extra windows
so you can see the checkin comments associated with both the deleted and
the added lines, which gives you yet more information.

> (on the other hand SCM makes it MUCH easier
> to deal with the 'cleanly applied' part :)
>
> > Tracking of patches is much easier.
>
> not really : how do you make sure that all the
> correct patches have been applied? All SCM
> lets you gain is knowing what patches have
> been applied, not what patches were NOT applied.

You're assuming the common denominator of SCM systems. That's not BK.
The easiest way to see if your patch is in Linus's BK tree is this:

bk push -nl bk://bk.kernel.org/released

That will list everything in your tree which is not in his tree. There are
other ways to do it as well, but you get the idea.

> > Access control is much easier.
>
> but if it's only Linus, then this is a moot point :)

But it isn't only Linus. Again, you are assuming a CVS/Perforce/etc style
SCM system. BK isn't like that. Each workspace is a repository and you
can move data between them in any way that you want. That's the whole
point, you can put staging areas between Linus and the unwashed masses.
And if the structure you come up with turns out to be wrong, hey, no
worries, just change it. BK does the right thing.

> ok, i have to go learn bitkeeper now so i can answer this
> intelligently, but i'll give some examples from perforce

I strongly urge you to go to http://www.bitkeeper.com, click on Test Drive,
download BK, and try it. It walks you through all the basics as well
as a merge, demo-ing the GUI file merge.

> Common task 1 : usability
> perforce tracks what everyone has. this means that if you want
> to do a sync to current, it only gives you what's changed since
> your last sync, because it knows, and you didn't do anything
> without telling it, right?

Right.

> well, what if i'm working on 2.5 stuff for my magical danaDriver?
> It's really intense, has a lot of files all over the place, and I
> don't want to hurt anything. Then someone asks me about the
> interaction between danaDriver and reiserFS in 2.4.17 and I want
> to make sure I can see exactly what they're talking about.
>
> so what i want to do (and i can't do in perforce, well, not easily)
> is to make a clean checkout of the last 'official release' of the
> whole project from SCM _without_ affecting my workspace.
>
> i.e. i do NOT want to create a new workspace, i do not want to create
> a special directory, i just want to do :
> mkdir linux
> cvs checkout linux -tag 2.4.17
> and get the whole linux-2.4.17 source code, without affecting my working
> directory 'linux-2.5' which also has the linux/* (main)branch checked out.

Trivial in bk, do a "bk help clone". And it doesn't have to be a tagged
release either, all changes (aka cvs commits) are reproducible snapshots
of the tree. It's like you followed every cvs commit with a cvs tag
except lots lots faster.

> This is something perforce does REALLY well.
> I can do this :
> - set it up so anyone can submit a change request [patch] but it has
> to be approved by the directory/file's owner first. if it's not
> approved, it can't be submitted.

bk help triggers

> this means that ANYONE can submit a patch, and everyone gets to
> participate, and accountability is maintained.

Yup.

> - set it up so that interested parties get notified of every change
> that is submitted, including a web link to the full diffs of that
> change. notification is file/directory based, and can have 'excludes'
> so rik can get notified every time 'virtualmemory.c' is changed so
> that way he can start flaming andrea right away :) :) :) :) :)

bk help triggers
http://linux.bkbits.net
--
---
Larry McVoy lm at bitmover.com http://www.bitmover.com/lm

2001-12-27 21:53:43

by Larry McVoy

[permalink] [raw]
Subject: Re: The direction linux is taking

On Thu, Dec 27, 2001 at 03:43:18PM -0600, Troy Benjegerdes wrote:
> > Yes, and it works great for easy merges. It sucks for complicated merges.
> > BK can help you a great deal with those merges.
>
> There is a point to be made though that if *Linus* has to do a complicated
> merge, the 'patch' that caused the merge should probably be suspect in the
> first place.

That doesn't work for a stream of N patches, which is exactly what a
maintainer deals with. In order for your way to work, Linus has to do
a release each time he applies a patch (or more accurately, before he
tries to apply a patch which touches the same files).

> The person sending the patch should be the one responsible for resolving a
> complicated merge.

Actually, no. If you are making a simple change and a set of complicated
changes have been made, I don't want you, the naive developer in this
example, doing the merge. I, the maintainer in this example, am much
more able to make the right call on the merge.

It's pretty pointless to argue which way is better because the answer is
"it depends". So you want a system that lets you do it the right way
no matter what that right way is.

Regardless, BK doesn't enforce either model, it can work either way, and it
could send the patch and the merge as another patch, allowing Linus to
redo the merge or accept the merge.

> If BK makes that easier, great. HOWEVER, I don't really
> want Linus to be using some tool that does automerging.. No SCM system and
> automerge tool is going to understand what the code *means*, unless it's
> got a compiler integrated into it.

You can turn the automerging off in BK, it's a command line option to
pull and resolve. I tend to agree with you that automerging gets you
in trouble, I look at each diff myself.

> I've had some strange things happen on a BK automerge in the past, and I
> don't trust any automated system that doesn't understand the code to not
> make some subtle semantic mistake. (Mind you, when strange things
> happened, the code usually worked, and I didn't notice until I tried to
> *manually* prepare a 'patch' to send upstream)

I think you may be comparing against the older automerge code. We rewrote
all of that and it's only in the bk-2.1.x series under the Dev directory
in the download area.

We've actually looked at each and every one of the merges in the PPC/Linux
tree and made sure the new code did the right thing. I'm not advocating
that you use it or not, you can choose, but if you see a "strange thing"
using bk-2.1.x then tell us, if it's wrong, we'll fix it. We've run about
40,000 merges through the new code so we're reasonably sure we have covered
the bases but if you can show us where we haven't, we'll thank you and then
fix it.
--
---
Larry McVoy lm at bitmover.com http://www.bitmover.com/lm

2001-12-27 22:00:33

by Troy Benjegerdes

[permalink] [raw]
Subject: BK scales, Bitmover doesn't [was Re: BK stuff ]

On Thu, Dec 27, 2001 at 01:39:51PM -0800, Larry McVoy wrote:
> My apologies to the list if this is considered off topic. I changed the
> subject so you can kill this thread if you like. I know how popular BK
> discussions are :-)

(I'm quite sure this is off-topic, but oh well :-/ )

I want to make a note about something that concerns me, and I haven't
really seen discussed much..

BK is quite nice, and from its design, could probably scale to having
every linux developer on the planet using the same 'base' linux kernel
tree with no problem. (Please don't argue with this point, it's not what
I'm concerned about)

However, the real problem I see is that although Bitkeeper (the product)
scales very well, Bitmover (the company) does NOT. Bitmover needs income
to scale, and I'm worried that if BK takes off for kernel development,
the demands on Bitmover from kernel developers will far outstrip the
increase in income they get from 'commercial' developers. If this happens,
it's only going to end in everyone getting pissed off.

I can't think of any 'win/win' solutions for this problem, only
'lose/lose' ones. Is anyone else worried about this, or am I just a
pessimist?

--
Troy Benjegerdes | master of mispeeling | 'da hozer' | [email protected]
-----"If this message isn't misspelled, I didn't write it" -- Me -----
"Why do musicians compose symphonies and poets write poems? They do it
because life wouldn't have any meaning for them if they didn't. That's
why I draw cartoons. It's my life." -- Charles Schulz

2001-12-27 22:24:22

by Larry McVoy

[permalink] [raw]
Subject: Re: BK scales, Bitmover doesn't [was Re: BK stuff ]

On Thu, Dec 27, 2001 at 03:59:56PM -0600, Troy Benjegerdes wrote:
> (I'm quite sure this is off-topic, but oh well :-/ )

Seems like your outgoing mail filter needs some work if you know it's
off topic and you post anyway, but what the hey, it's Christmas.

> However, the real problem I see is that althought Bitkeeper (the product)
> scales very well, Bitmover (the company) does NOT. Bitmover needs income
> to scale, and I'm worried that if BK takes off for kernel development,
> the demands on Bitmover from kernel developers will far outstrip the
> increase in income they get from 'commercial' developers. If this happens,
> it's only going to end in everyone getting pissed off.

This is way off topic. I could make similar claims about people using
what your company, Monta Vista, does but you don't see me posting in the
kernel list about their layoffs, business practices, etc. I certainly
could, but it shows no class (not that I've been accused of having lots
of class, but FUD seems too tasteless for me).

Regardless, to put minds at ease, we're fine. While we would welcome
more revenue (who wouldn't?), we've never had a layoff in our 4 year
history and aren't planning any. In addition, we've managed to support
you and the PPC team for almost 2 years without it being a problem,
I'm not sure why it should become a problem now. Oh yeah, tack on MySQL
as well, that's been under BK for longer than Linux/PPC. Of course, if
you are worried about it, since Monta Vista has gotten so much benefit
out of BK, they could help ensure the continued development by buying
a support contract. Hint, hint.

What if we do go out of business? What's wrong with that? If we go
under, BK reverts to a pure GPL license. That can't be a problem,
right?

Seems to me it's a win/win. We either stick around and support it because
the business model is sound, or we go under and you get it like any other
open source product. Yeah, it's better if we stick around because BK
is pretty complex, but if the open source crowd can handle the kernel,
gcc, X, etc, they can handle the BK source base, so I really don't see
the problem here. What am I missing?
--
---
Larry McVoy lm at bitmover.com http://www.bitmover.com/lm

2001-12-27 23:10:05

by Troy Benjegerdes

[permalink] [raw]
Subject: Re: BK scales, Bitmover doesn't [was Re: BK stuff ]

On Thu, Dec 27, 2001 at 02:23:59PM -0800, Larry McVoy wrote:
> On Thu, Dec 27, 2001 at 03:59:56PM -0600, Troy Benjegerdes wrote:
> > (I'm quite sure this is off-topic, but oh well :-/ )
>
> Seems like your outgoing mail filter needs some work if you know it's
> off topic and you post anyway, but what the hey, it's Christmas.
>
> > However, the real problem I see is that althought Bitkeeper (the product)
> > scales very well, Bitmover (the company) does NOT. Bitmover needs income
> > to scale, and I'm worried that if BK takes off for kernel development,
> > the demands on Bitmover from kernel developers will far outstrip the
> > increase in income they get from 'commercial' developers. If this happens,
> > it's only going to end in everyone getting pissed off.
>
> This is way off topic. I could make similar claims about people using
> what your company, Monta Vista, does but you don't see me posting in the
> kernel list about their layoffs, business practices, etc. I certainly
> could, but it shows no class (not that I've been accused of having lots
> of class, but FUD seems too tasteless for me).

<disclaimer>I am NOT trying to represent MontaVista here in any way. I'd
have these same issues whether I was working for them or not. I really didn't
want to bring MontaVista into this due to previous incidents.</disclaimer>

I suppose you could make similar claims. However, there is a very
important and subtle difference. MontaVista is NOT in any position to
tell developers using 'MontaVista' kernels that they must STOP using our
kernel, since it is GPL'ed.

Bitmover, however, is VERY MUCH in a position to tell developers to STOP
using Bitkeeper. As a matter of fact, it's in your license.

> Regardless, to put minds at ease, we're fine. While we would welcome
> more revenue (who wouldn't?), we've never had a layoff in our 4 year
> history and aren't planning any. In addition, we've managed to support
> you and the PPC team for almost 2 years without it being a problem,
> I'm not sure why it should become a problem now. Oh yeah, tack on MySQL
> as well, that's been under BK for longer than Linux/PPC. Of course, if
> you are worried about it, since Monta Vista has gotten so much benefit
> out of BK, they could help ensure the continued development by buying
> a support contract. Hint, hint.
>
> What if we do go out of business? What's wrong with that? If we go
> under, BK reverts to a pure GPL license. That can't be a problem,
> right?

But potentially not for 6 months, during which time the use of bitkeeper
is legally dubious, and probably not possible without altering the binary
(i.e., if openlogging.org goes down), opening up another mess.

> Seems to me it's a win/win. We either stick around and support it because
> the business model is sound, or we go under and you get it like any other
> open source product. Yeah, it's better if we stick around because BK
> is pretty complex, but if the open source crowd can handle the kernel,
> gcc, X, etc, they can handle the BK source base, so I really don't see
> the problem here. What am I missing?

If you don't stick around, OR get unhappy with us using BK, we have a
problem. Yes, you have some very nice fallbacks, which I thank you for,
but the fallbacks are still going to cause a great deal of pain.

The real problem is what if you have 300 kernel developers who suddenly
start costing you $5,000 a month in support?

According to the license, that's only 4 months before the 'group of
licensees' using BK for the kernel costs you $20,000, at which point the
BKL allows you to cut them off.

If Bitmover ever has to tell someone to quit using BK under the BKL, that,
IMHO, is a lose/lose situation, for everyone.

--
Troy Benjegerdes | master of mispeeling | 'da hozer' | [email protected]
-----"If this message isn't misspelled, I didn't write it" -- Me -----
"Why do musicians compose symphonies and poets write poems? They do it
because life wouldn't have any meaning for them if they didn't. That's
why I draw cartoons. It's my life." -- Charles Schulz

2001-12-27 23:40:48

by Larry McVoy

[permalink] [raw]
Subject: Re: BK scales, Bitmover doesn't [was Re: BK stuff ]

[Troy's worries deleted]

Unless someone other than Troy objects, I think this is better handled
offline. Troy has some issues that have nothing to do with the kernel
team. If anyone has any issues with our products, please take them up
with me, the kernel list really doesn't care.
--
---
Larry McVoy lm at bitmover.com http://www.bitmover.com/lm

2001-12-28 02:27:37

by Alexander Viro

[permalink] [raw]
Subject: Re: The direction linux is taking



On Thu, 27 Dec 2001, Larry McVoy wrote:

> But this didn't answer my question at all. My question was why is this a
> problem related to a source management system? I can see how to exactly
> mimic what described Al doing in BK so if that is the definition of goodness,
> the addition (or absence) of a SCM doesn't seem to change the answer.

Urgh. Let me describe what I'm using internally:

a) main object is mutating tree of changesets.
b) each changeset is either very local or a global search-and-replace
job _and_ _nothing_ _else_.
c) main operations: insert empty changeset, modify changeset and
ripple the changes forth, collapse changeset.
d) changesets are stored as patches _and_ set of trees cp -rl'ed
and patched from the baseline. Patches are the stable form. Trees are created
from them by a script, another one rediffs the trees.
e) for obvious reasons these trees are never edited. cp -a, edit
the copy, diff and possibly apply it (or its pieces) to original trees.
Then recreate changesets.
f) when it's time to port to new baseline, I drop the applied
changesets and recreate the trees from the rest. Then rediff. Notice
that due to (b) it's _easy_.

And yes, I deliberately avoid mixing global changes with local ones.
To the point of massaging the code with small changes so that the rest could
be done as a global replacement. Do one thing and do it well, and all such...

It's extra work, but it makes both testing and merges trivial. And
that work includes reordering changesets/massaging them (BTW, reordering is
done as adding empty changeset, pulling changes I want into it and rippling
them forth; then collapsing the old one).
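
For illustration, a toy version of the "create trees from the patches" step
described in (d) and (f) above, using hard-linked copies the way Al describes;
the paths and naming below are assumptions, not his actual scripts:

    # Toy "recreate trees from the patch series" script: each changeset tree
    # is a hard-linked copy (cp -rl) of the previous tree with that one
    # changeset's patch applied on top. Paths and naming are illustrative.
    import os, subprocess, sys

    def build_trees(baseline, patches, workdir="."):
        prev = baseline
        for i, patchfile in enumerate(patches, 1):
            tree = os.path.join(workdir, "cs%02d" % i)
            # hard-linked copy of the previous tree, as in the cp -rl step
            subprocess.run(["cp", "-rl", prev, tree], check=True)
            with open(patchfile) as p:
                subprocess.run(["patch", "-p1", "-s"], cwd=tree, stdin=p, check=True)
            prev = tree
        return prev     # tree with the whole series applied

    if __name__ == "__main__":
        # usage: build_trees.py <baseline-tree> <patch> [<patch> ...]
        build_trees(sys.argv[1], sys.argv[2:])

Rebasing onto a new baseline is then just rerunning this with the still-pending
patches, which is one reason keeping each changeset small and local matters.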

The real difference from BK is that history and tree of changesets
are independent things. It's not a "growing tree", it's "changing tree of
changesets and its previous forms".

Frankly, I'm not too interested in making merges easy. They _are_
easy if you follow a pretty simple self-discipline. And following it has
a lot of very obvious benefits.

BTW, stuff usually goes to Linus in series of 5-10 changesets.
I've put the 2.4 backport of 2.5.0--2.5.1 stuff on ftp.math.psu.edu/pub/viro -
S17-rc1*.tar.gz (three groups). That's what it looks like - backporting
changesets was damn trivial and they _are_ 2.4-mergable. Yup, 34 chunks.
When I'm able to do that with BK (both backport _and_ get them into
the form when they are obviously correct; the latter took a lot of PITA, esp.
the last 14 chunks) - you've got one more user. What's more, the rest of
namespaces patch (things that went into 2.5.2-pre{1,2}) is also 2.4-mergable.
In the peak the damn thing gave 200-odd kilobytes of combined patch. It
got gradually merged into -STABLE, for fsck sake. With no public casualties
(iput fuckup in 2.4.15 was an unrelated patch, but there was an idiotic bug
that slipped into the patches sent to Linus and ate his tree - missed
list_del() in a bad place ;-) And it involved complete rewrite of fs/super.c -
including change of allocation rules, locking, etc. The worst part was
~20 changesets with size of combined patch ~20Kb and sum of individual patch
sizes - about 3 times more than that. Live neurosurgery on core code with
no breakage in process... The only reason why I was able to pull that off
was the changeset massage/reordering/etc. - I'm no fscking genius and no
merge helpers in the world would help here.

If you can split your patch into sequence of obvious changesets -
merge will be easy. If you can't - you are fucked anyway.

PS: before anybody[1] starts whining about extra work - too soddin' bad,
it _is_ part of the job, as far as I'm concerned. Avoiding it invariably gives
us a mess - it's not like it never happened [2]

[1] names withheld to protect the guilty
[2] patch names <<--->>

2001-12-28 04:01:43

by Daniel Phillips

[permalink] [raw]
Subject: Re: The direction linux is taking

Hi Richard,

On December 27, 2001 06:59 pm, Richard Gooch wrote:
> But years of observations tells me that Linus likes the way he does
> things and doesn't care if others don't like it. I don't expect to see
> much change there.

Except that, at the kernel summit this year, Linus *did* give the nod to a
patchbot, in principle. I could have sworn.

It seems clear that the challenge is to come up with something that really is
so lightweight and useful that Linus will use it. That's the 'Linus test'.
If somebody hands him a piece of crap to use then why would we expect him to
deviate from his normal behaviour ;-)

--
Daniel

2001-12-29 17:14:47

by Oliver Xymoron

[permalink] [raw]
Subject: Re: The direction linux is taking

On Thu, 27 Dec 2001, Linus Torvalds wrote:

> So even if I used CVS or BK internally, that's not what people _gripe_
> about. People want write access, not just a SCM.

Not true. I for one want to do a 'cvs update' to get current and be able
to look at revision logs and keep my own branches and merge them onto the
tip. Sure, I can do this manually, but an SCM makes this quite a bit
easier.

Other useful tools are things like CVS blame:

http://bonsai.mozilla.org/cvsblame.cgi?file=mozilla/Makefile.in

(not sure how this would be done with single user check-in, but there's
probably a way to hack it in)

--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."

2001-12-29 17:27:47

by Larry McVoy

[permalink] [raw]
Subject: Re: The direction linux is taking

On Sat, Dec 29, 2001 at 11:14:18AM -0600, Oliver Xymoron wrote:
> Other useful tools are things like CVS blame:
>
> http://bonsai.mozilla.org/cvsblame.cgi?file=mozilla/Makefile.in
>
> (not sure how this would be done with single user check-in, but there's
> probably a way to hack it in)

We love the "blame" (aka annotate) feature and took it to a new level.
As an old coworker once said of SCM: "You can run, but you can't hide" :-)

We give you every possible variation of annotate in BK. You can see the
annotated listing of any version of a file, and you construct arbitrary
versions of files. The most useful one [1] is "show me the annotated
listing of all lines that have ever been in any version of this file".

You can also grep for stuff in the revision history. From the man page [2]:

To see if <pattern> occurs anywhere in any version of any
file of your tree:

$ bk -r grep -R pattern

To see if it occurs anywhere in the most recent version
of any file of your tree:

$ bk -r grep pattern

To see if it was added by the most recent delta made in
any file of your tree:

$ bk -r grep -R+ pattern

[1] http://www.bitkeeper.com/manpages/bk-annotate-1.html
[2] http://www.bitkeeper.com/manpages/bk-grep-1.html
--
---
Larry McVoy lm at bitmover.com http://www.bitmover.com/lm

2001-12-29 18:02:59

by Oliver Xymoron

[permalink] [raw]
Subject: Re: The direction linux is taking

On Fri, 28 Dec 2001, Daniel Phillips wrote:

> Hi Richard,
>
> On December 27, 2001 06:59 pm, Richard Gooch wrote:
> > But years of observations tells me that Linus likes the way he does
> > things and doesn't care if others don't like it. I don't expect to see
> > much change there.
>
> Except that, at the kernel summit this year, Linus *did* give the nod to a
> patchbot, in principle. I could have sworn.
>
> It seems clear that the challenge is to come up with something that really is
> so lightweight and useful that Linus will use it. That's the 'Linus test'.
> If somebody hands him a piece of crap to use then why would we expect him to
> deviate from his normal behaviour ;-)

If my understanding of the new kbuild and configure system is correct,
make clean and dep should be largely unnecessary and it should be possible
to build a patchbot that checks for incremental compilability:

for the current kernel release:
    unpack tree
    build the tree with default options (unprivileged user, obviously)

for each patch in queue:
    copy tree (-a)
    apply patch
    if rejects:
        bounce("doesn't cleanly apply to kernel x")
    if not includes linux/patch.changelog:
        bounce("doesn't include changelog entry")
    if not includes linux/patch.config:
        bounce("doesn't include possibly empty config options")
    for each option in patch.config:
        if not (cml2 --new-enable-option-from-command-line-feature option):
            bounce("config option x couldn't be enabled")
    if not (make): bounce("couldn't build patch: <diags>")
    remove copied tree

Optimize above to deal with dependencies in the patch queue.
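
For concreteness, a minimal sketch of that loop in Python; the tree and queue
paths are assumptions, and the config-option/cml2 step is left out for brevity:

    # Minimal patchbot loop following the pseudocode above. PRISTINE is the
    # unpacked, pre-built tree for the current release; QUEUE holds one file
    # per submitted patch. Both names are assumptions for illustration.
    import glob, os, shutil, subprocess

    PRISTINE = "linux-pristine"
    QUEUE = "queue/*.patch"

    def bounce(patch, why):
        print("bounce %s: %s" % (patch, why))   # a real bot would mail the sender

    for patchfile in sorted(glob.glob(QUEUE)):
        tree = "work"
        shutil.copytree(PRISTINE, tree)                     # copy tree (-a)
        with open(patchfile) as p:
            applied = subprocess.run(["patch", "-p1", "-s"], cwd=tree, stdin=p)
        if applied.returncode != 0:
            bounce(patchfile, "doesn't cleanly apply to this kernel")
        elif not os.path.exists(os.path.join(tree, "patch.changelog")):
            bounce(patchfile, "doesn't include changelog entry")
        elif subprocess.run(["make"], cwd=tree, capture_output=True).returncode != 0:
            bounce(patchfile, "couldn't build patch")
        shutil.rmtree(tree)                                 # remove copied tree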

Then we need a web interface that allows rejects with various canned
messages:

applied, thanks
redundant
obviously incorrect
doesn't meet coding style (partially automatable with lindent)
too ugly to live
breaks code freeze, resubmit later
submit to appropriate maintainer
missing MAINTAINER entry
insufficient documentation
insufficient rationale
forgot configure.help text (automatable)
(custom message, of course)
(drop silently)

--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."

2001-12-29 19:06:19

by Christer Weinigel

[permalink] [raw]
Subject: Re: The direction linux is taking

In article <[email protected]> you write:
>If my understanding of the new kbuild and configure system is correct,
>make clean and dep should be largely unnecessary and it should be possible
>to build a patchbot that checks for incremental compilability:
>
>for the current kernel release:
> unpack tree
> build the tree with default options (unprivileged user, obviously)

One thing that should not be forgotten is the risk of trojan horses
here: in practice the Makefile is a shell script, so to apply an arbitrary
patch and then compile with it would be a bit dangerous. It might be
possible to limit the patchbot to only accept code changes, but
that might remove most of the benefits. Also, I don't know how much
magic one might do with a properly crafted #include statement, such
as "#include /etc/passwd", and then the error message will contain
the encrypted password for root (shadow passwords fix this specific
problem, but you get the idea :-)

/Christer
--
"Just how much can I get away with and still go to heaven?"

2001-12-29 19:19:09

by Oliver Xymoron

[permalink] [raw]
Subject: Re: The direction linux is taking

On Sat, 29 Dec 2001, Christer Weinigel wrote:

> In article <[email protected]> you write:
> >If my understanding of the new kbuild and configure system is correct,
> >make clean and dep should be largely unnecessary and it should be possible
> >to build a patchbot that checks for incremental compilability:
> >
> >for the current kernel release:
> > unpack tree
> > build the tree with default options (unprivileged user, obviously)
>
> One thing that should not be forgotten is the risk of trojan horses
> here, in practice the Makefile is a shell script, so to apply any
> patch and the compile with it would be a bit dangerous. It might be
> possible to limit the patchbot to only accept code changes, but
> that might remove most of the benefits. Also, I don't know how much
> magic one might do with a properly crafted #include statement, such
> as "#include /etc/passwd" and then the error message will contain
> the encypted password for root (shadow passwords fix this specific
> problem, but you get the idea :-)

I think we can devise a suitably secure jail environment, possibly using
UML.

--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."

2001-12-29 19:38:20

by Larry McVoy

[permalink] [raw]
Subject: Re: The direction linux is taking

[patchbot stuff]

I normally stay out of these discussions because whatever I say usually
gets taken as "he's just promoting BitKeeper" but I think a point needs
to be made. I promise not to mention BK.



One thing that people seem to be ignoring is that patches tend to need
to be merged. The only way that can not be true is if the baseline is
revved every time a patch is applied, people get the new baseline before
they send in a patch.

If you have N people trying to patch the same file, you'll require N
releases and some poor shlep is going to have to resubmit their patch
N-1 times before it gets in.

If you look at this carefully, you'll see that in order to have an automated
system, you must serialize all development which touches the same files
(or the same areas in the same files if you are willing to automerge,
but automerging outside of an SCM is difficult to say the least).

I think this is basically why systems like what is being proposed fizzle
out; it's certainly come up over and over. The world wants to work in
parallel (think "1000's of Linux developers world wide", yeah, it's BS
but there are certainly a couple hundred). Forcing people to work in
serial isn't the answer.

One way to quantify this is to ask Linus, Alan, Marcelo, et al, how much
time they spend merging, i.e., how often do they get patch rejects?
Regardless of the answer, it will be interesting. If it is a lot,
then the patchbot idea has marginal usefulness. If it is none at all,
then that says development is serialized, which means we may be leaving
a lot of progress on the floor.

I wouldn't be surprised if the serialized case is the answer, or close
to it. It's rare that I hear Open Source leaders complain about merging,
which suggests fairly serialized processes. In the commercial world,
there is a ton of parallel development and merging is about 90% of what
people do when they are interacting with the SCM system. Checkin accounts
for about 8%, and after that it's all over the place.

Anyway, I'm interested to see if there are screams of "all I ever do is
merge and I hate it" or "merging? what's that?".
--
---
Larry McVoy lm at bitmover.com http://www.bitmover.com/lm

2001-12-29 19:58:43

by Oliver Xymoron

[permalink] [raw]
Subject: Re: The direction linux is taking

On Sat, 29 Dec 2001, Larry McVoy wrote:

> [patchbot stuff]

> If you have N people trying to patch the same file, you'll require N
> releases and some poor shlep is going to have to resubmit their patch
> N-1 times before it gets in.

The point is to have N patches queued against rev x that apply cleanly by
themselves before even considering merging. The idea is to cut out as much
bogosity beforehand as possible. Turning them locally into 'changesets' or
whatever to perform the actual merge after deciding which ones you're
going to try applying is orthogonal.

--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."

2001-12-29 20:02:13

by Olivier Galibert

[permalink] [raw]
Subject: Re: The direction linux is taking

On Sat, Dec 29, 2001 at 11:37:49AM -0800, Larry McVoy wrote:
> One way to quantify this is to ask Linus, Alan, Marcelo, et al, how much
> time they spend merging, i.e., how often do they get patch rejects?
> Regardless of the answer, it will be interesting. If it is a lot,
> then the patchbot idea has marginal usefulness. If it is none at all,
> then that says development is serialized, which means we may be leaving
> a lot of progress on the floor.

I personally think the merging is distributed, and done by each
subsystem maintainer. When that doesn't happen, and the merge is
non-trivial, we often see a message by Linus essentially saying "your
patch is cool, but it conflicts with another patch by <foo> that does
<bar>, so I've done a new pre with <foo>'s patch, could you merge and
rediff against it?".

OG.

2001-12-29 20:05:13

by Larry McVoy

[permalink] [raw]
Subject: Re: The direction linux is taking

On Sat, Dec 29, 2001 at 01:58:21PM -0600, Oliver Xymoron wrote:
> On Sat, 29 Dec 2001, Larry McVoy wrote:
>
> > [patchbot stuff]
>
> > If you have N people trying to patch the same file, you'll require N
> > releases and some poor shlep is going to have to resubmit their patch
> > N-1 times before it gets in.
>
> The point is to have N patches queued against rev x that apply cleanly

And my point is that your N is likely to be quite small out of a possible
set that is quite large. If I'm right, then the patchbot idea is pointless
because all the interesting work is happening in the part of the set that
the patchbot can't handle.

I make no claims as to where the partition is in Linux, only the maintainers
can tell us that. I do claim that in the commercial world, the set of patches
which apply clean is much much smaller than the set which require merging.

You might want to think about the _fact_ that getting patches to apply
cleanly serializes all work on the area being patched. If you can live
with that, fine. If you can't, the patchbot doesn't help. It solves the
easy part of the problem, which didn't need to be solved, and punts on
the hard part.
--
---
Larry McVoy lm at bitmover.com http://www.bitmover.com/lm

2001-12-29 20:04:33

by Dave Jones

[permalink] [raw]
Subject: Re: The direction linux is taking

On Sat, 29 Dec 2001, Larry McVoy wrote:

> One way to quantify this is to ask Linus, Alan, Marcelo, et al, how much
> time they spend merging, i.e., how often do they get patch rejects?
> Regardless of the answer, it will be interesting. If it is a lot,
> then the patchbot idea has marginal usefulness. If it is none at all,
> then that says development is serialized, which means we may be leaving
> a lot of progress on the floor.

Bringing the 2.4 fixes forward to 2.5 has brought about a couple
of head-scratching moments when I've had what appears to be parallel
development of the same piece of code in both trees; it's not so much the
rejects, but a case of 'has this already been fixed in a different way'
or 'is this necessary and does it make sense in 2.5?'.

> I wouldn't be surprised if the serialized case is the answer, or close
> to it.

For some things I think it's definitely the right approach.
A single patch with a dozen people's changes to the same piece of code
would be a royal pita if it then turns out one of them has problems.

Likewise, Alan's patches always seemed to isolate anything which could
complicate diagnosing problems into separate releases, eg no VFS & IDE updates
in the same patch unless blindingly obviously correct.

> Anyway, I'm interested to see if there are screams of "all I ever do is
> merge and I hate it" or "merging? what's that?".

I've only been keeping this tree since the beginning of the month,
so I'm still trying to find my feet a little, but so far merging is
pretty straightforward and usually painless.

The procedure when Linus/Marcelo release a new patch usually goes..

1. edit the patch to remove any bits that don't make sense
(eg, I have newer/better version in my tree)
2. cat ../patch-2.5.x | patch -p1 --dry-run
3. edit the patch to remove already present hunks.
4. manually fix up rejects in my tree, and remove reject hunk
from the diff.
5. back to (1) until no rejects.
6. cat ../patch-2.5.x | patch -p1
7. testing..
8. Create new diff, and give it a quick readthrough.

Out of all this the initial patch review (step #1) and the final
lookover are by far the most time consuming, and I don't think any
automated tool could speed this up and give me the same level of
understanding of what I'm merging.
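
As a side note, the purely mechanical part of that procedure (the dry-run in
steps 2-4) is easy to script; a rough sketch, with the patch path taken on the
command line and the match strings based on the usual GNU patch wording:

    # Rough sketch: dry-run a release patch against the current tree and list
    # which hunks look already applied and which are rejected, so only those
    # need hand-editing before the real patch run.
    import subprocess, sys

    def dry_run(patchfile, tree="."):
        with open(patchfile) as p:
            out = subprocess.run(["patch", "-p1", "--dry-run"], cwd=tree,
                                 stdin=p, capture_output=True, text=True)
        lines = (out.stdout + out.stderr).splitlines()
        already = [l for l in lines if "previously applied" in l]
        failed = [l for l in lines if "FAILED" in l]
        return already, failed

    if __name__ == "__main__":
        already, failed = dry_run(sys.argv[1])
        print("%d hunk(s) look already applied, %d hunk(s) rejected"
              % (len(already), len(failed)))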


Dave.

--
| Dave Jones. http://www.codemonkey.org.uk
| SuSE Labs

2001-12-29 20:31:04

by Oliver Xymoron

[permalink] [raw]
Subject: Re: The direction linux is taking

On Sat, 29 Dec 2001, Larry McVoy wrote:

> On Sat, Dec 29, 2001 at 01:58:21PM -0600, Oliver Xymoron wrote:
> > On Sat, 29 Dec 2001, Larry McVoy wrote:
> >
> > > [patchbot stuff]
> >
> > > If you have N people trying to patch the same file, you'll require N
> > > releases and some poor shlep is going to have to resubmit their patch
> > > N-1 times before it gets in.
> >
> > The point is to have N patches queued against rev x that apply cleanly
>
> And my point is that your N is likely to be quite small out of a possible
> set that is quite large.

Nonsense. X is a release. At a minimum, a submitted patch should apply to
the current globally visible kernel release. If you want your patch to
go in, it has to be current; otherwise there's no use bothering the
maintainer. And it ought to compile. Anything else should be bounced
without further consideration. If the release gets bumped in a way that
breaks everything in the queue, then everything in the queue goes back to
the drawing board.

> If I'm right, then the patchbot idea is pointless because all the
> interesting work is happening in the part of the set that the patchbot
> can't handle.

The purpose of the patchbot is to bounce patches that don't
apply/compile/meet whatever baseline before Maintainer ever has to look at
them, thus reducing the 'black hole effect' of the overloaded maintainer.

The patchbot doesn't try to do any merging, it simply says "here are the
qualifying candidates for merging with the current release".

In answer to your (still orthogonal) question about how parallel
development is, my impression at least is that the answer is 'very'.

--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."

2001-12-29 21:03:59

by Benjamin LaHaise

[permalink] [raw]
Subject: Re: The direction linux is taking

On Sat, Dec 29, 2001 at 11:37:49AM -0800, Larry McVoy wrote:
> If you have N people trying to patch the same file, you'll require N
> releases and some poor shlep is going to have to resubmit their patch
> N-1 times before it gets in.

Wrong. Most patches are independent, and even touch different functions.
Things like "add member foo of type baz to struct z" are independent
changes even if they conflict when patching.

> Anyway, I'm interested to see if there are screams of "all I ever do is
> merge and I hate it" or "merging? what's that?".

How about "I'm sick of resending this one line bugfix to maintainer of
$foo who keeps dropping it"? That's the problem that patchbot is meant
to solve, not the merging problem. If the people responsible for applying
patches were perfect, we wouldn't need it.

-ben

2001-12-29 22:04:34

by Larry McVoy

[permalink] [raw]
Subject: Re: The direction linux is taking

On Sat, Dec 29, 2001 at 04:03:34PM -0500, Benjamin LaHaise wrote:
> On Sat, Dec 29, 2001 at 11:37:49AM -0800, Larry McVoy wrote:
> > If you have N people trying to patch the same file, you'll require N
> > releases and some poor shlep is going to have to resubmit their patch
> > N-1 times before it gets in.
>
> Wrong. Most patches are independant, and even touch different functions.

Really? And the data which shows this absolute statement to be true is
where? I'm happy to believe data, but there is no data here.

> Things like "add member foo of type baz to struct z" are independant
> changes even if they conflict when patching.

So what? A conflict is anything that patch(1) can't handle automatically.
The fact that the conflicts are independent changes is irrelevant, patch
doesn't care.

> > Anyway, I'm interested to see if there are screams of "all I ever do is
> > merge and I hate it" or "merging? what's that?".
>
> How about "I'm sick of resending this one line bugfix to maintainer of
> $foo who keeps dropping it"? That's the problem that patchbot is meant
> to solve, not the merging problem.

OK, so go solve it already. Just don't be surprised if it doesn't
get used because it is yet another 10% solution.
--
---
Larry McVoy lm at bitmover.com http://www.bitmover.com/lm

2001-12-29 22:09:44

by Larry McVoy

[permalink] [raw]
Subject: Re: The direction linux is taking

On Sat, Dec 29, 2001 at 02:30:36PM -0600, Oliver Xymoron wrote:
> Nonsense. X is a release. At a minimum, a submitted patch should apply to
> the current globally visible kernel release. If you want your patch to
> go in, it has to be current, otherwise no use bothering the
> maintainer. And it ought to compile.

OK, so this glorious patchbot is going to make sure that a patch patches
cleanly against a known version and compiles. And that buys me exactly
what? Not a heck of a lot. Especially since, as is obvious, if you send
in stuff that doesn't compile consistently, your patches are likely to go
to the back of the line or just get dropped.

> The purpose of the patchbot is to bounce patches that don't
> apply/compile/meet whatever baseline before Maintainer ever has to look at
> them, thus reducing the 'black hole effect' of the overloaded maintainer.

I'd suggest you go try this idea out. It's funny how often people suggest
that they are going to make the problems go away: it's always this same
proposal, typically nobody does any work, and when they do, it doesn't get
used. Could it be there is a reason for that?

I'm prepared to be wrong, but I don't hear the maintainers asking for this
patchbot. Why not?
--
---
Larry McVoy lm at bitmover.com http://www.bitmover.com/lm

2001-12-29 22:27:14

by Dave Jones

[permalink] [raw]
Subject: Re: The direction linux is taking

On Sat, 29 Dec 2001, Larry McVoy wrote:

> I'd suggest you go try this idea out. It's funny how often people suggest
> that they are going to make the problems go away, it's always this same
> proposal, typically nobody does any work, when they do it doesn't get used,
> could it be there is a reason for that?
>
> I'm prepared to be wrong, but I don't hear the maintainers asking for this
> patchbot. Why not?

Something I've done for quite a while is set up some procmail rules
to filter subjects with [PATCH] and the like into an additional
folder. Every so often, I'll go through it, deleting ones that
ended up getting merged with mainline.

The ones that don't, I look around for followups in my l-k
folder for that month. If it got shot down in flames or discarded
for another reason, it gets deleted from my patch folder.

Picking through the rest usually turns up some useful bits
which, in the past, I've resynced, cleaned up, and pushed to the relevant
maintainer, and which have finally been accepted.

I'm making two points here.

1. This is not perfect in that a lot of patches get sent to the list
without being prefixed with [PATCH], so unless I happen to see it
when skimming my l-k folder, it falls through the gaps.

2. I don't have time to read every patch that ends up in my
patch folder.

The patchbot solves the first problem by making it impossible to accept
a patch without a standard prefix. The second it could fix by generating
a monthly "pending" report for every person using it, and having these
summaries sent to a patchbot-pending mailing list.
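
For illustration, the pending report could be little more than a scan of
such a patch folder for [PATCH] subjects that haven't been marked as merged.
A sketch along those lines, against a hypothetical mbox and merged-ID list,
not anything Dave actually runs:

# Sketch of the "pending" report: list [PATCH] messages in a patch
# folder whose Message-IDs haven't been recorded as merged.
import mailbox, time

PATCH_FOLDER = "/home/davej/Mail/patches"    # hypothetical mbox of patches
MERGED_LIST = "/home/davej/Mail/merged-ids"  # hypothetical list of Message-IDs

merged = set(open(MERGED_LIST).read().split())

pending = []
for msg in mailbox.mbox(PATCH_FOLDER):
    subject = msg.get("Subject", "")
    if "[PATCH]" in subject and msg.get("Message-ID") not in merged:
        pending.append((msg.get("Date", ""), msg.get("From", ""), subject))

print("Pending patches as of", time.strftime("%Y-%m-%d"))
for date, sender, subject in pending:
    print(" ", date, "|", sender, "|", subject)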

As long as *someone* sees these patches are still out there, they'll get
merged eventually, even if it means sitting in a tree other than Linus's
for a few months.

Dave.

--
| Dave Jones. http://www.codemonkey.org.uk
| SuSE Labs

2001-12-29 22:25:14

by Oliver Xymoron

[permalink] [raw]
Subject: Re: The direction linux is taking

On Sat, 29 Dec 2001, Larry McVoy wrote:

> On Sat, Dec 29, 2001 at 02:30:36PM -0600, Oliver Xymoron wrote:
> > Nonsense. X is a release. At a minimum, a submitted patch should apply to
> > the current globally visible kernel release. If you want your patch to
> > go in, it has to be current, otherwise no use bothering the
> > maintainer. And it ought to compile.
>
> OK, so this glorious patchbot is going to make sure that a patch patches
> cleanly against a known version and compiles. And that buys me exactly
> what? Not a heck of a lot. Especially since, as is obvious, if you send
> in stuff that doesn't compile consistently, your patches are likely to go
> to the back of the line or just get dropped.

It never shows up in the maintainer's inbox, leaving them more time to
address the remainder. And fewer of the increasingly bitter complaints of
dropped patches.

> > The purpose of the patchbot is to bounce patches that don't
> > apply/compile/meet whatever baseline before Maintainer ever has to look at
> > them, thus reducing the 'black hole effect' of the overloaded maintainer.
>
> I'd suggest you go try this idea out. It's funny how often people suggest
> that they are going to make the problems go away, it's always this same
> proposal, typically nobody does any work, when they do it doesn't get used,
> could it be there is a reason for that?
>
> I'm prepared to be wrong, but I don't hear the maintainers asking for this
> patchbot. Why not?

I don't hear them asking for SCM either.

--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."

2001-12-29 22:51:17

by Alan

[permalink] [raw]
Subject: Re: The direction linux is taking

> It never shows up in the maintainer's inbox, leaving them more time to
> address the remainder. And fewer of the increasingly bitter complaints of
> dropped patches.

Most mangled diffs I get are caused by pine. Fixing pine would do more
wonders than any magical patchbot. I also get patches and changes from
people who quite genuinely either can't mail me unmangled diffs (e.g. those
afflicted by the Lotus corporate mail policy) or who may really know their
stuff and even be the vendor but are not familiar with the patch/diff
tools. A robot isn't going to teach them.
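
That said, a front end could at least catch the grosser kinds of mangling
before a human ever sees the patch, for instance by checking that each hunk
body matches the counts in its @@ header. A crude sketch of such a check for
unified diffs; it catches wrapped or dropped lines, not stripped trailing
whitespace:

# Crude sanity check for a mangled unified diff: each hunk header
# "@@ -a,b +c,d @@" must be followed by exactly b old-side lines and
# d new-side lines.  Lines wrapped or dropped by a mailer break the counts.
import re, sys

def hunk_counts_ok(lines):
    i = 0
    while i < len(lines):
        m = re.match(r"@@ -\d+(?:,(\d+))? \+\d+(?:,(\d+))? @@", lines[i])
        i += 1
        if not m:
            continue
        old = int(m.group(1) or 1)
        new = int(m.group(2) or 1)
        while old > 0 or new > 0:
            if i >= len(lines):
                return False
            tag = lines[i][:1]
            i += 1
            if tag in (" ", ""):        # context line
                old -= 1; new -= 1
            elif tag == "-":
                old -= 1
            elif tag == "+":
                new -= 1
            elif tag == "\\":           # "\ No newline at end of file"
                continue
            else:
                return False            # most likely a wrapped line
            if old < 0 or new < 0:
                return False
    return True

lines = open(sys.argv[1]).read().splitlines()
print("looks sane" if hunk_counts_ok(lines) else "mangled diff")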


Alan

2001-12-29 22:52:07

by Alan

[permalink] [raw]
Subject: Re: The direction linux is taking

> > Wrong. Most patches are independant, and even touch different functions.
>
> Really? And the data which shows this absolute statement to be true is
> where? I'm happy to believe data, but there is no data here.

I rarely get clashes in merges with either 2.2, or with 2.4-ac when I was
doing it. Offsets from multiple patches to the same file happen sometimes,
but it's very rare that two people had overlapping changes, and when it
happened it almost always meant that the two of them needed to talk, because
they were fixing the same thing or adding related features.

The big exception is Configure.help, which is a nightmare for patch, and the
one file I basically always did hand merges on.

2001-12-29 22:52:37

by Alan

[permalink] [raw]
Subject: Re: The direction linux is taking

> to filter subjects with [PATCH] and the likes into an additional
> folder. Every so often, I'll go through it, deleting ones that
> ended up getting merged with mainline.

Right. It's probably worth noting that I do the same, and I think Linus does
too, because he's certainly asked me to put PATCH in the subject line before.

2001-12-29 22:59:47

by Oliver Xymoron

[permalink] [raw]
Subject: Re: The direction linux is taking

On Sat, 29 Dec 2001, Alan Cox wrote:

> > It never shows up in the maintainer's inbox, leaving them more time to
> > address the remainder. And fewer of the increasingly bitter complaints of
> > dropped patches.
>
> Most mangled diffs I get are caused by pine. Fixing pine would do more
> wonders than any magical patchbot.

Unfortunately not an option. Cloning pine, though..

By the way, the 'forgot to include the patch' syndrome seems to be a
common problem with Mutt..

> I also get patches and changes from
> people who quite genuinely either can't mail me unmangled diffs (eg the
> lotus corporate mail policy afflicted) or are from people who may really
> know their stuff and even be the vendor but are not familiar with the
> patch/diff tools. A robot isn't going to teach them.

The other part of patchbot is to auto-bounce stuff off the queue when it
no longer applies to the current rev (garbage collection), to possibly
simplify manually bouncing patches with common comments, and to expose the
queue to the outside world. Allowing web-based patch upload might work
around the braindead mail agent problem.
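
A sketch of what that garbage-collection pass might look like: re-test each
queued patch with a dry run and bounce the stale ones back with a canned
comment. The queue layout, addresses and the way the submitter is recorded
are all assumptions, not the actual patchbot:

# Sketch of the garbage-collection side: when a new release comes out,
# re-test every queued patch and bounce the ones that no longer apply.
import glob, os, smtplib, subprocess
from email.message import EmailMessage

KERNEL_TREE = "/usr/src/linux-current"       # assumed current tree
QUEUE_DIR = "/var/spool/patchbot/queue"      # assumed queue
BOUNCE_DIR = "/var/spool/patchbot/bounced"   # assumed holding area

def still_applies(patch_file):
    return subprocess.run(["patch", "-p1", "--dry-run", "-i", patch_file],
                          cwd=KERNEL_TREE, capture_output=True).returncode == 0

def bounce(patch_file, submitter):
    msg = EmailMessage()
    msg["To"] = submitter
    msg["From"] = "patchbot@example.org"
    msg["Subject"] = "patch no longer applies, please resync"
    msg.set_content("Your queued patch %s no longer applies to the current "
                    "release; please rediff and resubmit."
                    % os.path.basename(patch_file))
    smtplib.SMTP("localhost").send_message(msg)
    os.rename(patch_file, os.path.join(BOUNCE_DIR, os.path.basename(patch_file)))

for patch_file in glob.glob(QUEUE_DIR + "/*.patch"):
    if not still_applies(patch_file):
        # submitter address assumed to be stored alongside the patch
        submitter = open(patch_file + ".from").read().strip()
        bounce(patch_file, submitter)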

--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."

2001-12-29 23:09:17

by Alan

[permalink] [raw]
Subject: Re: The direction linux is taking

> The 'kill trailing spaces on each line' bug ? That has been fixed since
> 4.31 or so. Or are there other evils lurking ?

That's the beast. Last time I looked it was fixed in some vendor trees,
but the official pine people were refusing to take it.

2001-12-29 23:04:47

by Larry McVoy

[permalink] [raw]
Subject: Re: The direction linux is taking

On Sat, Dec 29, 2001 at 04:24:56PM -0600, Oliver Xymoron wrote:
> > OK, so this glorious patchbot is going to make sure that a patch patches
> > cleanly against a known version and compiles. And that buys me exactly
> > what? Not a heck of a lot. Especially since, as is obvious, if you send
> > in stuff that doesn't compile consistently, your patches are likely to go
> > to the back of the line or just get dropped.
>
> It never shows up in the maintainer's inbox, leaving them more time to
> address the remainder. And fewer of the increasingly bitter complaints of
> dropped patches.

I think it's great that Oliver is volunteering to build this system,
host it, and provide the build infrastructure and hardware; that's cool! But
wouldn't it be a whole lot less work to tell people to type make before
they send in the patch?

Doesn't it seem a bit strange to be building a system to make sure that
the people who submit patches have submitted patches which compile?
Is it really true that there are any significant number of patches
submitted that don't even compile?

And while we are on the build topic, what platforms get built? What configs?

> > I'm prepared to be wrong, but I don't hear the maintainers asking for this
> > patchbot. Why not?
>
> I don't hear them asking for SCM either.

OK Socrates, nice try, but try and stay focussed and answer the question.
It's right there above your non-answer.
--
---
Larry McVoy lm at bitmover.com http://www.bitmover.com/lm

2001-12-29 23:07:37

by Dave Jones

[permalink] [raw]
Subject: Re: The direction linux is taking

On Sat, 29 Dec 2001, Alan Cox wrote:

> Most mangled diffs I get are caused by pine. Fixing pine would do more
> wonders than any magical patchbot.

The 'kill trailing spaces on each line' bug? That has been fixed since
4.31 or so. Or are there other evils lurking?
I change between mutt & pine depending on the position of the moon,
and that's the only bug I'm aware of for which you (and Linus)
have said "Resend with a non-borked mua".

Dave.

--
| Dave Jones. http://www.codemonkey.org.uk
| SuSE Labs

2001-12-29 23:14:57

by Larry McVoy

[permalink] [raw]
Subject: Re: The direction linux is taking

On Sat, Dec 29, 2001 at 10:58:27PM +0000, Alan Cox wrote:
> > > Wrong. Most patches are independant, and even touch different functions.
> >
> > Really? And the data which shows this absolute statement to be true is
> > where? I'm happy to believe data, but there is no data here.
>
> I rarely get clashes in merges with either 2.2 or with 2.4-ac when I was
> doing it. Offsets from multiple patches to the same file happen some times
> but its very rare two people had overlapping changes and when it happened
> it almost always meant that the two of them needed to talk because they were
> fixing the same thing or adding related features.

So that means that pretty much 100% of development to any one area is being
done by one person?!? That's cool, but doesn't it limit the speed at which
forward progress can be made? And does that mean for any area there is only
one person who really understands it?

I'm not sure that you want single threading of development to be something
enforced by your development process, and that's what it is starting to
sound like more and more. Isn't it true that a lack of merge conflicts
means that there is no parallel development in that area?

We have lots of commercial customers using BK on the Linux kernel, they
are doing embedded this and that. The rate of change that they make is
much greater than the rate of change made in the Linus maintained tree.
I'm not saying it's good or bad, it's just different. I can say that
merging is a huge issue in commercial shops. It's interesting to hear
that it is not in Linux.

Some sociology guy with a CS background should do a study on this and
explore the differences. Is fast change better? Is slow change better?
--
---
Larry McVoy lm at bitmover.com http://www.bitmover.com/lm

2001-12-29 23:09:47

by Alexander Viro

[permalink] [raw]
Subject: Re: The direction linux is taking



On Sat, 29 Dec 2001, Oliver Xymoron wrote:

> On Sat, 29 Dec 2001, Alan Cox wrote:
>
> > > It never shows up in the maintainer's inbox, leaving them more time to
> > > address the remainder. And fewer of the increasingly bitter complaints of
> > > dropped patches.
> >
> > Most mangled diffs I get are caused by pine. Fixing pine would do more
> > wonders than any magical patchbot.
>
> Unfortunately not an option. Cloning pine, though..

Set it to use vi as editor, rm `which pico` and be happy - usually patch
is mangled by pico, not pine itself.

..ooO(and if somebody starts whining "but puko is user-friendly" you don't want
their patches anyway)

2001-12-29 23:24:48

by Dave Jones

[permalink] [raw]
Subject: Re: The direction linux is taking

On Sat, 29 Dec 2001, Alan Cox wrote:

> > The 'kill trailing spaces on each line' bug ? That has been fixed since
> > 4.31 or so. Or are there other evils lurking ?
> Thats the beast. Last time I looked it was fixed in some vendor trees

Ah, explains why I've not seen it happen for a while.

> but the official pine people were refusing to take it

*sigh*
Another reason to be a mutt advocate I suppose 8)

Dave.

--
| Dave Jones. http://www.codemonkey.org.uk
| SuSE Labs

2001-12-29 23:30:08

by Oliver Xymoron

[permalink] [raw]
Subject: Re: The direction linux is taking

On Sat, 29 Dec 2001, Larry McVoy wrote:

> On Sat, Dec 29, 2001 at 04:24:56PM -0600, Oliver Xymoron wrote:
> > > OK, so this glorious patchbot is going to make sure that a patch patches
> > > cleanly against a known version and compiles. And that buys me exactly
> > > what? Not a heck of a lot. Especially since, as is obvious, if you send
> > > in stuff that doesn't compile consistently, your patches are likely to go
> > > to the back of the line or just get dropped.
> >
> > It never shows up in the maintainer's inbox, leaving them more time to
> > address the remainder. And fewer of the increasingly bitter complaints of
> > dropped patches.
>
> I think it's great that Oliver is volunteering to build this system,
> host it, provide the build infrastructure and hardware, that's cool! But
> wouldn't it be a whole lot less work to tell people to type make before
> they send in the patch?

Which works until the next rev of the kernel, at which point the patch may
not be valid any more.

> Is it really true that there are any significant number of patches
> submitted that don't even compile?

No; I was mostly just pointing out that with kbuild and CML2 it would be
possible to do in O(patch) rather than O(whole kernel). Testing applicability
of the patch is 80% of the work.
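
As an illustration of the O(patch) idea: pull the touched .c files out of
the diff and rebuild only the corresponding objects. This presumes a build
system with per-object make targets (the kbuild dependency just mentioned)
and a tree that already has the patch applied and a configuration in place;
it is a sketch of the concept, not the proposed CML2 integration:

# Sketch of the O(patch) compile test: find which .c files a patch
# touches and rebuild only those objects.
import re, subprocess, sys

KERNEL_TREE = "/usr/src/linux-current"   # assumed: patch already applied,
                                         # tree already configured

def touched_sources(patch_text):
    # "+++ new/drivers/net/foo.c" style headers; strip the first path
    # component the way "patch -p1" would
    files = re.findall(r"^\+\+\+ +(\S+)", patch_text, re.M)
    files = [f.split("/", 1)[1] for f in files if "/" in f]
    return [f for f in files if f.endswith(".c")]

def compile_touched(patch_file):
    for src in touched_sources(open(patch_file).read()):
        obj = src[:-2] + ".o"
        # per-object target: rebuild just this one file
        if subprocess.run(["make", obj], cwd=KERNEL_TREE).returncode != 0:
            return False, obj
    return True, None

ok, failed = compile_touched(sys.argv[1])
print("compile test passed" if ok else "compile test failed on " + failed)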

> And while we are the build topic, what platforms get built? What configs?

Up to whoever $maintainer happens to be. If DaveM runs a patch queue
the answer probably includes Sparc, otherwise it defaults to x86. The
config question was answered in my original proposal.

> > > I'm prepared to be wrong, but I don't hear the maintainers asking for this
> > > patchbot. Why not?
> >
> > I don't hear them asking for SCM either.
>
> OK Socrates, nice try, but try and stay focussed and answer the question.
> It's right there above your non-answer.

Actually my answer is perfectly to the point: clearly you know exactly why
someone would keep harping on about a patch management solution no one
seems very interested in, so I hardly need to spell it out for you. But
I can add some details:

- Linus has limited bandwidth, and as is becoming clear, so do the
  other maintainers
- dropped patches are the primary indicator of this
- people don't implement exponential back-off well; they tend to get
  cranky instead (a toy sketch of the schedule follows below)
- feedback is a good way to reduce congestion and resends
- the old solution, Jitterbug, was horrid
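
The back-off itself is mechanical enough to write down. A toy sketch of the
resend schedule; the one-week starting interval is an arbitrary assumption:

# Toy exponential back-off for patch resends: wait a week before the
# first resend and double the wait after each one that gets no response.
def resend_schedule(first_wait_days=7, max_resends=5):
    wait = first_wait_days
    elapsed = 0
    for attempt in range(1, max_resends + 1):
        elapsed += wait
        yield attempt, elapsed
        wait *= 2

for attempt, day in resend_schedule():
    print("resend #%d on day %d" % (attempt, day))
# prints: resend #1 on day 7, #2 on day 21, #3 on day 49, ...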


--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."


2001-12-29 23:34:48

by Oliver Xymoron

[permalink] [raw]
Subject: Re: The direction linux is taking

On Sun, 30 Dec 2001, Dave Jones wrote:

> On Sat, 29 Dec 2001, Alan Cox wrote:
>
> > > The 'kill trailing spaces on each line' bug ? That has been fixed since
> > > 4.31 or so. Or are there other evils lurking ?
> > Thats the beast. Last time I looked it was fixed in some vendor trees
>
> Ah, explains why I've not seen it happen for a while.
>
> > but the official pine people were refusing to take it

The vendors shouldn't even have their own trees according to the license.

> *sigh*
> Another reason to be a mutt advocate I suppose 8)

Don't kid yourself, Mutt's a pile of crap too. And not an alternative for
99% of Pine users.

--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."

2001-12-29 23:33:48

by Dave Jones

[permalink] [raw]
Subject: Re: The direction linux is taking

On Sat, 29 Dec 2001, Larry McVoy wrote:

> So that means that pretty much 100% of development to any one area is being
> done by one person?!? That's cool, but doesn't it limit the speed at which
> forward progress can be made?

The closest approximation my mind's eye can make of how things work
looks something like this:

 h h h h h
  \ | | | /
   m  m  m
    \ | /
     ttt
      |
      l

h - random j hacker working on same file/subsystem different goals
m - maintainer for file/subsys
t - "forked" tree maintainer (-ac, -dj, -aa etc..)
l - Linus

Whilst development happens concurrently in parallel, the notion of
progress is somewhat serialised as changes work their way down to
Linus.

(This whole thing goes a little astray when random j hacker sends
patches straight to Linus bypassing everyone else and they get
merged, but the controlled anarchy prevails and everyone somehow
gets back in sync).

Dave.

--
| Dave Jones. http://www.codemonkey.org.uk
| SuSE Labs

2001-12-29 23:39:18

by Larry McVoy

[permalink] [raw]
Subject: Re: The direction linux is taking

On Sun, Dec 30, 2001 at 12:33:29AM +0100, Dave Jones wrote:
> On Sat, 29 Dec 2001, Larry McVoy wrote:
>
> > So that means that pretty much 100% of development to any one area is being
> > done by one person?!? That's cool, but doesn't it limit the speed at which
> > forward progress can be made?
>
> The closest approximation my minds-eye can make of how things work
> look something like this..
>
> h h h h h
> \ | | | /
> m m m
> \ |/
> ttt
> |
> l
>
> h - random j hacker working on same file/subsystem different goals
> m - maintainer for file/subsys
> t - "forked" tree maintainer (-ac, -dj, -aa etc..)
> l - Linus
>
> Whilst development happens concurrently in parallel, the notion of
> progress is somewhat serialised as changes work their way down to
> Linus.

In my message above, I specifically asked about any one area, asking if
there was parallel development in that area. So far, no one has said "yes".
If the answer was "yes", somebody in your fanin (nice ascii, BTW :) is
merging. So the answer is either

    no one  => no parallel development in any one area
or
    someone

If it is "someone", who is it?
--
---
Larry McVoy lm at bitmover.com http://www.bitmover.com/lm

2001-12-29 23:36:28

by Larry McVoy

[permalink] [raw]
Subject: Re: The direction linux is taking

On Sat, Dec 29, 2001 at 05:29:52PM -0600, Oliver Xymoron wrote:
> > Is it really true that there are any significant number of patches
> > submitted that don't even compile?
>
> No

OK, so there are no significant numbers of patches that the patchbot will
eliminate, by your admission. So what's the point? What is the problem
you have solved? And where's the code? This sounds like you have whittled
it down to a cgi-script of about 100 lines of perl. How about building it
and demonstrating the usefulness rather than telling us how great it is
going to be?
--
---
Larry McVoy lm at bitmover.com http://www.bitmover.com/lm

2001-12-29 23:42:28

by Arnaldo Carvalho de Melo

[permalink] [raw]
Subject: Re: The direction linux is taking

Em Sat, Dec 29, 2001 at 05:33:57PM -0600, Oliver Xymoron escreveu:
> On Sun, 30 Dec 2001, Dave Jones wrote:
> > On Sat, 29 Dec 2001, Alan Cox wrote:
> > > > The 'kill trailing spaces on each line' bug ? That has been fixed since
> > > > 4.31 or so. Or are there other evils lurking ?
> > > Thats the beast. Last time I looked it was fixed in some vendor trees
> >
> > Ah, explains why I've not seen it happen for a while.
> >
> > > but the official pine people were refusing to take it
>
> The vendors shouldn't even have their own trees according to the license.

heh, this is getting way off-topic, but last time I checked one can ship
pine with patches if the resulting package has an 'l' at the end, e.g.
pine-4.31L.i386.rpm

> > *sigh*
> > Another reason to be a mutt advocate I suppose 8)
>
> Don't kid yourself, Mutt's a pile of crap too. And not an alternative for
> 99% of Pine users.

Hey, I'm one of the 1% then, switched from pine one year ago and I'm happy
with mutt.

- Arnaldo

2001-12-29 23:47:48

by Dave Jones

[permalink] [raw]
Subject: Re: The direction linux is taking

On Sat, 29 Dec 2001, Larry McVoy wrote:

[You liked the ascii, so you get to see it again 8) ]

> > h h h h h
> > \ | | | /
> > m m m
> > \ |/
> > ttt
> > |
> > l
> >
> > h - random j hacker working on same file/subsystem different goals
> > m - maintainer for file/subsys
> > t - "forked" tree maintainer (-ac, -dj, -aa etc..)
> > l - Linus

> In my message above, I specifically asked about any one area, asking if
> there was parallel development in that area. So far, noone has said "yes".
> If the answer was "yes", somebody in your fanin (nice ascii, BTW :) is
> merging. So the answer is either
> If it is "someone", who is it?

Yes.
"m", and to a lesser extent, "t"

Dave.


--
| Dave Jones. http://www.codemonkey.org.uk
| SuSE Labs

2001-12-29 23:51:18

by Benjamin LaHaise

[permalink] [raw]
Subject: Re: The direction linux is taking

On Sat, Dec 29, 2001 at 03:38:40PM -0800, Larry McVoy wrote:
> In my message above, I specifically asked about any one area, asking if
> there was parallel development in that area. So far, noone has said "yes".
> If the answer was "yes", somebody in your fanin (nice ascii, BTW :) is
> merging. So the answer is either
>
> noone => no parallel development in any one area
> or
> someone
>
> If it is "someone", who is it?

Going back to your original message, the majority of development is
occurring in parallel: networking, sound, video, scsi... all those
drivers are completely independent of the changes to the core kernel
(modulo occasional major changes), and tend not to have conflicts when
they reach the mainstream kernel (maintainers tend to sort that out).

That said, some areas (like VM) are being touched by multiple people,
but we usually don't touch things in a way that conflicts with each
other (just occasionally). Instead, the typical problem is tracking
the state of all the changes and knowing where things are being delayed
or dropped (and why). Is that a sociological problem or a development
problem? Does it need to be solved with people or tools? I don't know
the answer to those questions or the right mix between the two answers.
Certainly, a set of tools that helps avoid these issues by taking care of
the first level of problems (merging, test compile, basic testing)
would be a great advancement in productivity for those of us who work
that way, but if the underlying problem is social, it's not going to
help overall.


-ben
--
Fish.

2001-12-29 23:59:48

by Oliver Xymoron

[permalink] [raw]
Subject: Re: The direction linux is taking

On Sat, 29 Dec 2001, Larry McVoy wrote:

> On Sat, Dec 29, 2001 at 05:29:52PM -0600, Oliver Xymoron wrote:
> > > Is it really true that there are any significant number of patches
> > > submitted that don't even compile?
> >
> > No
>
> OK, so there are no significant numbers of patches that the patchbot will
> eliminate, by your admission.

Except for the ones that get garbage collected after each new kernel
release WHEN THE VALIDITY OF THE QUEUE IS RECHECKED. Which will catch all
the duplicates or conflicts that were queued but not applied. See original
pseudo-code (which happened to be Python). And filtering by checking for
apply/compile was only half of the original suggestion.
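
Concretely, the recheck can separate "already merged" from "needs a resync"
with two dry runs: a patch that applies in reverse is already in the new
release, and one that applies neither way needs a rediff. A sketch of that
classification in the same spirit as that pseudo-code (hypothetical paths,
not the original):

# Sketch of the queue recheck against a new release: applies forward ->
# still pending; applies in reverse -> already merged; neither -> conflict.
import glob, subprocess

KERNEL_TREE = "/usr/src/linux-new-release"   # hypothetical new release tree
QUEUE_DIR = "/var/spool/patchbot/queue"      # hypothetical queue

def dry_run(patch_file, reverse=False):
    cmd = ["patch", "-p1", "--dry-run", "-i", patch_file]
    if reverse:
        cmd.insert(1, "-R")
    return subprocess.run(cmd, cwd=KERNEL_TREE,
                          capture_output=True).returncode == 0

for patch_file in sorted(glob.glob(QUEUE_DIR + "/*.patch")):
    if dry_run(patch_file):
        state = "still pending"
    elif dry_run(patch_file, reverse=True):
        state = "already merged, drop from queue"
    else:
        state = "conflicts, ask submitter to resync"
    print(patch_file, ":", state)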

> So what's the point? What is the problem you have solved? And where's
> the code? This sounds like you have whittled it down to a cgi-script of
> about 100 lines of perl. How about building it and demonstrating the
> usefulness rather than telling us how great it is going to be?

The original suggestion (about the possibility of compile-testing patches
incrementally) was dependent on kbuild and CML2 being in the kernel
already, but I do have a proof-of-concept for the rest in the works.

--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."



2001-12-30 00:04:58

by Larry McVoy

[permalink] [raw]
Subject: Re: The direction linux is taking

On Sat, Dec 29, 2001 at 05:59:15PM -0600, Oliver Xymoron wrote:
> > > > Is it really true that there are any significant number of patches
> > > > submitted that don't even compile?
> > >
> > > No
> >
> > OK, so there are no significant numbers of patches that the patchbot will
> > eliminate, by your admission.
>
> Except for the ones that get garbage collected after each new kernel
> release WHEN THE VALIDITY OF THE QUEUE IS RECHECKED.

Which is how many? Do you have _any_ data which shows that this is going
to do anything? Everything I know says that you're in the 1% area. My
experience is perhaps different than yours, but I'd like to know why.

> > So what's the point? What is the problem you have solved? And where's
> > the code? This sounds like you have whittled it down to a cgi-script of
> > about 100 lines of perl. How about building it and demonstrating the
> > usefulness rather than telling us how great it is going to be?
>
> The original suggestion (about the possibility of compile-testing patches
> incrementally) was dependent on kbuild and CML2 being in the kernel
> already, but I do have a proof-of-concept for the rest in the works.

Great, so set it up, write a parser that grabs all the patches out of the
list, run them through your system, and report back how much it helps.
I don't think it will but if it does and you are willing to do the work,
more power to you.
--
---
Larry McVoy lm at bitmover.com http://www.bitmover.com/lm

2001-12-30 00:26:13

by Oliver Xymoron

[permalink] [raw]
Subject: Re: The direction linux is taking

On Sat, 29 Dec 2001, Larry McVoy wrote:

> On Sat, Dec 29, 2001 at 05:59:15PM -0600, Oliver Xymoron wrote:
> > > > > Is it really true that there are any significant number of patches
> > > > > submitted that don't even compile?
> > > >
> > > > No
> > >
> > > OK, so there are no significant numbers of patches that the patchbot will
> > > eliminate, by your admission.
> >
> > Except for the ones that get garbage collected after each new kernel
> > release WHEN THE VALIDITY OF THE QUEUE IS RECHECKED.
>
> Which is how many? Do you have _any_ data which shows that this is going
> to do anything?

Yes. Every patch that goes by that says 'resynced with kernel x+1'. Which
seems to be a decent fraction of the ones that people care about. And an
ever larger percentage with each release. You've been on the list a few
years, you have the same data I have.

One of the stated problems with jitterbug was that it got cluttered with
stuff that never got kicked off. This is just part of the method of
kicking it off; the other part (sigh, it would have been nice if you'd read
my original message rather than latching on to a small part of the thread
that followed) was a system for quickly kicking stuff off the queue
manually.

> Great, so set it up, write a parser that grabs all the patches out of the
> list, run them through your system, and report back how much it helps.

My impression is that quite a bit of stuff (the majority?) never shows up
on the list at all, being fed directly to maintainers. And I obviously
can't say much about the manual reject side of things, not being Linus.

--
"Love the dolphins," she advised him. "Write by W.A.S.T.E.."

2001-12-30 02:27:10

by Alan

[permalink] [raw]
Subject: Re: The direction linux is taking

> So that means that pretty much 100% of development to any one area is being
> done by one person?!? That's cool, but doesn't it limit the speed at which

It means that pretty much all development in one area is already being
co-ordinated. It also means that bug fixes tend to be small and not overlap
other changes.

The "central person" thing within groups seems to just evolve

2001-12-30 02:49:51

by Larry McVoy

[permalink] [raw]
Subject: Re: The direction linux is taking

On Sun, Dec 30, 2001 at 02:36:57AM +0000, Alan Cox wrote:
> > So that means that pretty much 100% of development to any one area is being
> > done by one person?!? That's cool, but doesn't it limit the speed at which
>
> It means that pretty much all development in one area is already being
> co-ordinated. It also means that bug fixes tend to be small and not overlap
> other changes.

It also means that your rate of change in a single area is limited to
the output of one human being: either the human doing the work or the
human doing the merging. So far it seems more like nobody is doing any
merging; Dave says someone does, but nobody else has spoken up, and I
tend to think that merging is not a common process in the Linux tree.
The rate of change sort of indicates that. I suspect that people avoid
merge conflicts because they hurt, because of the poor tools.

"Docter it hurts when I have to merge"
"So don't do that."

Single threading development is fine, it has benefits, but it also
has costs. It's a tradeoff.

No commercial software firm would put up with a development process
which caused that to be the case. They may want it to be the case,
but they want to be able to choose. We used to have pretty good merge
technology, much better than CVS, and we still got beat up all the time
about it until we made it what it is today.

The point there is that if commercial firms coordinated the merging by
hand, either by doing it by hand or avoiding it by hand - one of which
has to be true in the Linux development community - then we would have
never heard a word about our merge technology. In this respect, the
commercial shops are way way ahead of Linux. They can choose to work
like Linux does, that's an option, and yet they don't. It's way too
time consuming for little or no gain. You should think about that.
They learn from you, they're cherry picking your best stuff, what are
you getting from them?

There are others out here who have worked at big OS shops, they can
back this up. Hey, Pete, what would happen at Sun if they took
filemerge away? How long do you think that would last?

I can also tell you that your description does not at all match what
is happening in the Linux/PPC development nor the MySQL development.
They have merge conflicts all the time and we have years of data to prove
it. If you would like the exact numbers for the PPC tree, for example,
I can give them to you. I can also give you numbers for BK itself;
we're a small team but we have merge conflicts daily. We wouldn't
make any progress if we were all waiting on each other all the time.
Nor would we be able to support the code if only one person got to work
on a particular area; that's a dangerous approach, it means that you are
dependent on that one person. I can see it for Linus, he's sort of Mr
Good Taste, but I can't see it making sense for particular sections of
the code. Each section should have at least two active experts.

The point is that you may live in a world where merging is rare, but I
suspect that's almost certainly caused by poor tools. If that weren't the
case, can you explain why, when the PPC team moved over to BK, we saw lots
of merge conflicts? BK didn't make those up; they faithfully reflect what
the people did.
--
---
Larry McVoy lm at bitmover.com http://www.bitmover.com/lm

2001-12-30 03:54:41

by Dave Jones

[permalink] [raw]
Subject: Re: The direction linux is taking

On Sat, 29 Dec 2001, Larry McVoy wrote:

> that people avoid merge conflicts because they hurt, because of
> the poor tools.
>
> "Docter it hurts when I have to merge"
> "So don't do that."

What I've seen happen quite frequently...

developer a: "Hey, here's a patch to do aaaa"

maintainer: "Ah, I've already sent Linus a patch to merge changes from
developer b in this area, but your changes look ok. can you resync when
Linus puts next patch out, and send me the rediffed copy again?"

developer: "Sure, no problem".

Alternatively, it's the maintainer telling developer a: "Ah, I've a bunch
of stuff queued from developer b; you guys need to talk on this and work
out how to get this stuff working together. Get back to me when you've
worked it out".

People management rather than source management systems.

Dave.

--
| Dave Jones. http://www.codemonkey.org.uk
| SuSE Labs

2001-12-30 10:01:14

by Alan

[permalink] [raw]
Subject: Re: The direction linux is taking

> the human doing the merging. So far, it seems more like nobody is
> doing any merging, Dave says someone does but nobody else has spoken

Lots of people do. I get all my wireless, my isdn, my usb
patches all nicely prepacked and merged for example.

> up and I tend to think that merging is not a common process in the
> Linux tree, the rate of change sort of indicates that. I suspect

The primary limit on the rate of change is the rate at which Linus merges
stuff, nothing else.

> is happening in the Linux/PPC development nor the MySQL development.
> They have merge conflicts all the time and we have years of data to prove

For the ppc folks I guess because they are keeping a parallel tree. That's a
totally different animal, because you collide continually with things you've
submitted and changes in the core tree.

Alan

2001-12-31 08:45:16

by Daniel Phillips

[permalink] [raw]
Subject: Re: The direction linux is taking

On December 29, 2001 11:04 pm, Larry McVoy wrote:
> On Sat, Dec 29, 2001 at 04:03:34PM -0500, Benjamin LaHaise wrote:
> > On Sat, Dec 29, 2001 at 11:37:49AM -0800, Larry McVoy wrote:
> > > If you have N people trying to patch the same file, you'll require N
> > > releases and some poor shlep is going to have to resubmit their patch
> > > N-1 times before it gets in.
> >
> > Wrong. Most patches are independant, and even touch different functions.

>
> Really? And the data which shows this absolute statement to be true is
> where? I'm happy to believe data, but there is no data here.

Ben's right. Most patches are independent because the work divides itself up
that way, because people talk about this stuff (on IRC) and cooperate, and
because the tree structure evolves to support the natural divisions ;)

--
Daniel

2001-12-31 08:48:36

by Daniel Phillips

[permalink] [raw]
Subject: Re: The direction linux is taking

On December 30, 2001 12:01 am, Alan Cox wrote:
> > It never shows up in the maintainer's inbox, leaving them more time to
> > address the remainder. And fewer of the increasingly bitter complaints of
> > dropped patches.
>
> Most mangled diffs I get are caused by pine.

Kmail has finally gotten to the point where it doesn't mangle patches, so
long as you remember to shut off the line wrap. And yes, it runs pretty
well under xfce[1] ;-)

--
Daniel

[1] The attachment right-click menu fails to come up unless KDE itself has
been started at least once since power up :-/

2002-01-01 01:32:47

by Horst von Brand

[permalink] [raw]
Subject: Re: The direction linux is taking

Larry McVoy <[email protected]> said:

[...]

> We have lots of commercial customers using BK on the Linux kernel, they
> are doing embedded this and that. The rate of change that they make is
> much greater than the rate of change made in the Linus maintained tree.
> I'm not saying it's good or bad, it's just different. I can say that
> merging is a huge issue in commercial shops. It's interesting to hear
> that it is not in Linux.

Hummm... I guess this is because you see new (development, pre, ...)
kernels every few days. Alan said he gets mostly line shifts (==
non-overlapping patches, or "people should be talking to each other").
Maybe due to the rapid version turnover? Maybe most of the merging is being
done by the posters themselves during development, as Ye Kernel Gods refuse
to do it for them?

I'm surprised that commercial shops see much merging. I'd assume they have
direct access to the up-to-the-minute source, so _less_ merging should be
necessary. Or they are (over)confident in their tools, and just work on
stale sources?

> Some sociology guy with a CS background should do a study on this and
> explore the differences. Is fast change better? Is slow change better?

I think the difference lies elsewhere.
--
Horst von Brand [email protected]
Casilla 9G, Vin~a del Mar, Chile +56 32 672616

2002-01-01 01:46:57

by Dave Jones

[permalink] [raw]
Subject: Re: The direction linux is taking

On Mon, 31 Dec 2001, Horst von Brand wrote:

> Hummm... I guess this is because yoiu see new (development, pre, ...)
> kernels each few days. Alan said he gets mosly line shifts (==
> non-overlapping patches, or "people should be talking to each other").
> Maybe due to the rapid version turnover? Maybe most of the merging is being
> done by the posters themselves during development, as Ye Kernel Gods refuse
> to do it for them?

In -dj, I've found a few things that I've merged haven't applied cleanly,
and in retrospect I'm glad, because by looking at why they didn't apply,
and by fixing things up by hand, I've gained a better understanding of
what I'm actually applying. This certainly makes it easier when merging
with Linus, since I can explain to him what a certain change does rather
than send him a resync patch with "Some VM voodoo or other".

I've learned quite a bit in the last few weeks in areas of the kernel
I hadn't thought of looking at before now. It's certainly a
mind-broadening thing to look after something like this.

Dave.

--
| Dave Jones. http://www.codemonkey.org.uk
| SuSE Labs

2002-01-01 05:26:09

by Rob Landley

[permalink] [raw]
Subject: Re: The direction linux is taking

On Monday 31 December 2001 08:32 pm, Horst von Brand wrote:

> Hummm... I guess this is because yoiu see new (development, pre, ...)
> kernels each few days. Alan said he gets mosly line shifts (==
> non-overlapping patches, or "people should be talking to each other").
> Maybe due to the rapid version turnover? Maybe most of the merging is being
> done by the posters themselves during development, as Ye Kernel Gods refuse
> to do it for them?

Ye kernel gods can spend an afternoon reviewing 15 patches that apply cleanly
to see if their changes are a good idea, or they can spend that afternoon
cleaning up and merging one.

Cleaning up and merging is something the patch's author can do, or thousands
of other random developers on the list. Exerting editorial judgement on what
is and isn't a good idea is NOT something everybody else can do.

Which is a better use of the maintainer's time?

> I'm surprised that commercial shops see much merging. I'd assume they have
> direct access to the up-to-the-minute source, so _less_ merging should be
> necesary. Or they are (over)confident in their tools, and just work on
> stale sources?

Commercial shops believe that developers are interchangeable, and that
developers should know what it is they're going to do before they do it. So
if you have three pending changes in this area of the code, you assign them
to three developers in parallel as a matter of course. (Then there's code
reviews with people who disagree with your choice of variable and function
names EVERY TIME... But we won't go there.)

A medium-small change in a commercial shop can easily take two weeks, and you
have to check the code out to do it. (This locks the code you've changed so
nobody else can change it until you check the changed files back in or
somebody with administrator access breaks the lock.) So either:

You check out the files you're going to change and keep them checked out for
a week or two while you design, implement, test, code review, make the
changes the code reviewer wants (it's like getting your car inspected, they
always find something you need even if it's just busy-work), test again, yell
at the code reviewer for making a stupid suggestion that shows a profound
lack of understanding of the code, have a meeting to resolve it, lose, have
to write a workaround in a completely unrelated section of the code, wait for
the guy in the testing lab to get back from vacation...

Ahem. Either you keep the files checked out all that time, or you check them
out at the last minute (tracking down whoever has the last two files you need
checked out and getting them to check them back in), merge your changes with
what's there (often a good day's worth of work right there), and check it
back in. Except when you have to get your merge code reviewed, and then a
manager to sign off on the fact it's BEEN code reviewed, and then it goes to
test again...

Yes, merging is a big deal in Fortune 500 shops. Read The Mythical
Man-Month... :)

> > Some sociology guy with a CS background should do a study on this and
> > explore the differences. Is fast change better? Is slow change better?
>
> I think the difference lies elsewhere.

Between The Mythical Man-Month, The Cathedral and the Bazaar, and The
Innovator's Dilemma, I think the study's already been done. But since no
graduate student's taken credit for a thesis or dissertation on it yet, I'm
sure there's still an opportunity to take credit in the ivory towers of
academia...

Rob

2002-01-01 05:35:09

by Rob Landley

[permalink] [raw]
Subject: Re: The direction linux is taking

On Monday 31 December 2001 03:45 am, Daniel Phillips wrote:
> On December 29, 2001 11:04 pm, Larry McVoy wrote:
> > On Sat, Dec 29, 2001 at 04:03:34PM -0500, Benjamin LaHaise wrote:
> > > On Sat, Dec 29, 2001 at 11:37:49AM -0800, Larry McVoy wrote:
> > > > If you have N people trying to patch the same file, you'll require N
> > > > releases and some poor shlep is going to have to resubmit their patch
> > > > N-1 times before it gets in.
> > >
> > > Wrong. Most patches are independant, and even touch different
> > > functions.
> >
> > Really? And the data which shows this absolute statement to be true is
> > where? I'm happy to believe data, but there is no data here.
>
> Ben's right. Most patches are independant because the work divides itself
> up that way, because people talk about this stuff (on IRC) and cooperate,
> and because the tree structure evolves to support the natural divisions ;)

In a fan club, saying "andrea's the MM guy, talk to him" is only natural.
It's a meritocracy, he's alpha geek on call in that area right now.

In a Fortune 500 bureaucracy, people are largely supposed to be
interchangeable cogs. People's worth is measured in dollars, and somebody
worth $70k a year should be swappable with somebody else worth $70k/year.
(It's a bit more complex than that, there's certifications and experience,
but somebody with a BA and 2 years experience working on inflatable widgets
should be exchangeable with somebody else with a BA and 2 years experience
working on inflatable widgets. If not, they'll "get up to speed", it's just
a question of acquiring experience...)

So having a single point of failure in the development process... It's
unthinkable. What if that guy decides to retire? What if he gets hit by a
bus? What if the competition hires him away? What if he DEMANDS MORE MONEY?
(It's all about money in a corporation. It's all numbers. The bottom line.
So if the whole project depends on one guy, logically he'll ask for as much
salary as the project's worth. That's a lot of how management thinks.)

So if you DO have someone breaking down the project into subsections, it's
unlikely to be a developer, it would be a manager assigning areas of
responsibility. And shuffling them around from time to time so nobody gets
the idea they can't be replaced. But it's easiest just to scatter
tasks over the group and keep things mixed up all the time...

Fan clubs are all individuals. Bureaucracies try to eliminate the
individual: the automated assembly line with no humans in it is the
bureaucratic ideal...

Totally different paradigm.

Rob

2002-01-02 10:15:05

by Daniel Phillips

[permalink] [raw]
Subject: Re: The direction linux is taking

On December 31, 2001 10:33 pm, Rob Landley wrote:
> On Monday 31 December 2001 03:45 am, Daniel Phillips wrote:
> > On December 29, 2001 11:04 pm, Larry McVoy wrote:
> > > On Sat, Dec 29, 2001 at 04:03:34PM -0500, Benjamin LaHaise wrote:
> > > > On Sat, Dec 29, 2001 at 11:37:49AM -0800, Larry McVoy wrote:
> > > > > If you have N people trying to patch the same file, you'll require N
> > > > > releases and some poor shlep is going to have to resubmit their patch
> > > > > N-1 times before it gets in.
> > > >
> > > > Wrong. Most patches are independant, and even touch different
> > > > functions.
> > >
> > > Really? And the data which shows this absolute statement to be true is
> > > where? I'm happy to believe data, but there is no data here.
> >
> > Ben's right. Most patches are independant because the work divides itself
> > up that way, because people talk about this stuff (on IRC) and cooperate,
> > and because the tree structure evolves to support the natural divisions ;)
>
> In a fan club, saying "andrea's the MM guy, talk to him" is only natural.
> It's a meritocracy, he's alpha geek on call in that area right now.
>
> In a fortune 500 bureaucracy, people are largely supposed to be
> interchangeable cogs. People's worth is measured in dollars, and somebody
> worth $70k a year should be swappable with somebody else worth $70k/year.
> (It's a bit more complex than that, there's certifications and experience,
> but somebody with a BA and 2 years experience working on inflatable widgets
> should be exchangeable with somebody else with a BA and 2 years experience
> working on inflatable widgets. If not, they'll "get up to speed", it's just
> a question of acquiring experience...)
>
> So having a single point of failure in the development process... It's
> unthinkable. What if that guy decides to retire? What if he gets hit by a
> bus. What if the competition hires him away? What if he DEMANDS MORE MONEY?
> (It's all about money in a corporation. It's all numbers. The bottom line.
> So if the whole project depends on one guy, logically he'll ask for as much
> salary as the project's worth. That's a lot of how management thinks.)
>
> So if you DO have someone breaking down the project into subsections, it's
> unlikely to be a developer, it would be a manager assigning areas of
> responsibility. And shuffling them around from time to time so nobody gets
> the idea they can't be replaced. But it's easiest just to scatter
> tasks over the group and keep things mixed up all the time...
>
> Fan clubs are all individuals. Bureaucracies try to eliminate the
> individual: the automated assembly line with no humans in it is the
> bureaucratic ideal...
>
> Totally different paradigm.

Yes, that's all +5 insightful, except... what makes you think any one of the
Linux core hackers is irreplaceable? I know you didn't say that, but you
did say 'single point of failure', and it amounts to the same thing.

--
Daniel

2002-01-02 10:53:31

by NeilBrown

[permalink] [raw]
Subject: Re: The direction linux is taking

On Wednesday January 2, [email protected] wrote:
>
> Yes, that's all +5 insightful, except... what makes you think any one of the
> Linux core hackers is irreplaceable? I know you didn't say that, but you
> did say 'single point of failure', and it amounts to the same thing.
>

I think that the difference is that there is no planning to make sure
that no-one is irreplaceable. Sure people can be replaced, but it
might take a while. A subsystem might be unmaintained (or
under-maintained) for a while until some sucker^Wdeveloper puts their
hand up. That isn't a situation that a "fortune 500 bureaucracy"
would be able to tolerate. But we seem to cope.

NeilBrown

2002-01-02 11:04:51

by Daniel Phillips

[permalink] [raw]
Subject: Re: The direction linux is taking

On January 2, 2002 11:50 am, Neil Brown wrote:
> On Wednesday January 2, [email protected] wrote:
> >
> > Yes, that's all +5 insightful, except... what makes you think any one of the
> > Linux core hackers is irreplaceable? I know you didn't say that, but you
> > did say 'single point of failure', and it amounts to the same thing.
>
> I think that the difference is that there is no planning to make sure
> that no-one is irreplaceable.

Right, it's like the difference between a planned economy and a capitalist
one. People find their own niches in the Linux hierarchy. Except for a few
'official' maintainer positions there is nobody doing any assigning. Surely
there is some irony in this.

> Sure people can be replaced, but it
> might take a while. A subsystem might be unmaintained (or
> under-maintained) for a while until some sucker^Wdeveloper puts their
> hand up. That isn't a situation that a "fortune 500 bureaucracy"
> would be able to tolerate. But we seem to cope.

Yep, we're just lucky ;)

--
Daniel

2002-01-02 15:04:56

by Geert Uytterhoeven

[permalink] [raw]
Subject: Re: The direction linux is taking

On Sat, 29 Dec 2001, Alan Cox wrote:
> The big exception is Configure.help which is a nightmare for patch, and the
> one file I basically always did hand merges on

Perhaps it would help if the entries in Configure.help were sorted?

It's very difficult to merge anything in that file, since in many cases the
`new' entries added by the new patch already existed in our local tree
(speaking about m68k). Someone just wrote new explanations and inserted them
someplace else in the file.
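
Sorting it mechanically is not hard if you assume the usual layout: a prompt
line, the CONFIG_* symbol on a line of its own, then indented help text. A
rough sketch; it keeps whatever precedes the first entry at the top and would
also hoist any mid-file section comments, so it is illustrative only:

# Rough sketch of sorting Configure.help entries by config symbol.
import re, sys

lines = open(sys.argv[1]).read().splitlines()

# an entry starts at the prompt line just before each bare CONFIG_* line
starts = [i - 1 for i, l in enumerate(lines)
          if i > 0 and re.match(r"CONFIG_\w+\s*$", l)]

preamble = lines[:starts[0]] if starts else lines
entries = []
for n, s in enumerate(starts):
    end = starts[n + 1] if n + 1 < len(starts) else len(lines)
    symbol = lines[s + 1].strip()
    entries.append((symbol, lines[s:end]))

entries.sort(key=lambda e: e[0])     # sort entries by config symbol

out = list(preamble)
for _symbol, body in entries:
    out.extend(body)
print("\n".join(out))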

Gr{oetje,eeting}s,

Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- [email protected]

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
-- Linus Torvalds

2002-01-02 15:07:06

by Geert Uytterhoeven

[permalink] [raw]
Subject: Re: The direction linux is taking

On Sat, 29 Dec 2001, Dave Jones wrote:
> On Sat, 29 Dec 2001, Larry McVoy wrote:
> > Anyway, I'm interested to see if there are screams of "all I ever do is
> > merge and I hate it" or "merging? what's that?".
>
> I've only been keeping this tree since the beginning of the month,
> so I'm still trying to find my feet a little, but so far merging is
> pretty straightforward and usually painless.
>
> The procedure when Linus/Marcelo release a new patch usually goes..
>
> 1. edit the patch to remove any bits that don't make sense
> (eg, I have newer/better version in my tree)
> 2. cat ../patch-2.5.x | patch -p1 --dry-run
> 3. edit the patch to remove already present hunks.
> 4. manually fix up rejects in my tree, and remove reject hunk
> from the diff.
> 5. back to (1) until no rejects.
> 6. cat ../patch-2.5.x | patch -p1
> 7. testing..
> 8. Create new diff, and give it a quick readthrough.
>
> Out of all this the initial patch review (step #1) and the final
> lookover are by far the most time consuming, and I don't think any
> automated tool could speed this up and give me the same level of
> understanding over what I'm merging.

When I was bringing the m68k tree back in sync near the end of 1999, I used the
following approach:
- keep all trees, use `cp -rl' and `patch' when a new version is released
  (cfr. Al Viro)
- use `same' to prevent the need for zillions of disk space, and to make the
  creation of diffs between such trees fast
- merge trees using my home-brew `mergetree' perl script (basically a
  recursive `merge' command), which can replace the destination file by a
  hard link if it's the same as one of the originals

Despite having a real CVS repository for m68k now, I still use mergetree...
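
Roughly, such a mergetree pass walks the new tree, hard-links files that are
identical to the local copy, and falls back to a three-way merge(1) against
the common ancestor for the rest, hard-linking the result as well when it
comes out identical to one of the inputs. A loose approximation of the idea;
the real script is Perl and is not reproduced here, and handling of files
that exist only in the local tree or are missing from the ancestor is
omitted:

# Loose Python approximation of a recursive "mergetree" with hard-linking.
import filecmp, os, shutil, subprocess

ANCESTOR = "linux-2.4.16"        # common base both trees started from
LOCAL    = "linux-2.4.16-m68k"   # our tree with local changes
NEW      = "linux-2.4.17"        # freshly released tree
DEST     = "linux-2.4.17-m68k"   # merge result

for root, dirs, files in os.walk(NEW):
    rel = os.path.relpath(root, NEW)
    os.makedirs(os.path.join(DEST, rel), exist_ok=True)
    for name in files:
        relpath = os.path.normpath(os.path.join(rel, name))
        new, local, base = (os.path.join(t, relpath)
                            for t in (NEW, LOCAL, ANCESTOR))
        dest = os.path.join(DEST, relpath)
        if not os.path.exists(local) or filecmp.cmp(new, local, shallow=False):
            os.link(new, dest)           # untouched locally: just hard-link
            continue
        shutil.copyfile(local, dest)
        # merge(1) folds the base->new changes into our copy
        subprocess.run(["merge", dest, base, new])
        for orig in (new, local):
            if filecmp.cmp(dest, orig, shallow=False):
                os.remove(dest)          # result identical to an input:
                os.link(orig, dest)      # replace it with a hard link
                break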

Gr{oetje,eeting}s,

Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- [email protected]

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
-- Linus Torvalds

2002-01-07 05:27:12

by Eyal Sohya

[permalink] [raw]
Subject: Re: The direction linux is taking


Why doesn't Linus keep his sources in a CVS tree
so the rest of the folks can read it from there?

He can still have write access to the tree
exclusively.

We can keep patches in some kind of database as well
so they don't get lost, and Linus can mark them rejected/applied,
etc., etc.

Just thinking aloud.

_________________________________________________________________
Chat with friends online, try MSN Messenger: http://messenger.msn.com