Hi, I'd like to ask you a few questions:
* Do you like the way Linux distributions integrate the kernel?
* Wouldn't you prefer they ship with the stable and still-maintained
2.6.16.x, while optionally providing the latest kernel for those who
want it or who have new hardware?
* Do you think the megafreeze development model [1] and the "I don't
trust upstream" development model are broken? (And why?)
[1] http://www.modeemi.fi/~tuomov/b/archives/2007/03/03/T19_15_26/
(I'm going to ask this for several projects, not only the kernel)
On Wed, 07 Nov 2007 23:56:57 +0100
ciol <[email protected]> wrote:
> * Wouldn't you prefer they ...
http://en.wikipedia.org/wiki/Push_poll
--
"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it." - Brian W. Kernighan
Rik van Riel wrote:
> On Wed, 07 Nov 2007 23:56:57 +0100
> ciol <[email protected]> wrote:
>
>> * Wouldn't you prefer they ...
>
> http://en.wikipedia.org/wiki/Push_poll
>
http://en.wikipedia.org/wiki/Paranoia
On Wed, Nov 07, 2007 at 11:56:57PM +0100, ciol wrote:
> Hi, I'd like to ask you a few questions:
>
> * Do you like the way Linux distributions integrate the kernel?
>
> * Wouldn't you prefer they ship with the stable and still-maintained
> 2.6.16.x, while optionally providing the latest kernel for those who
> want it or who have new hardware?
No.
With 2.6.16 "new hardware" roughly equals to "sold during the
last 2-3 years", so most users would be forced to use this "option".
"providing optionally the latest kernel" would be a horror to support
for a distribution.
>From all I hear all big distributions spend 3-6 months of QA work
between pushing a kernel into the development branch of their
distribution and putting it into a release.
They can't do this work for 4-6 different upstream kernels each year.
And if they'd omit it, their custumers would both blame them for
shipping such a buggy distribution and swamp their support with bug
reports.
> * Do you think the megafreeze development model [1] and the "I don't trust
> upstream" development model are broken? (And why?)
>...
Definitely not.
If your "stable base system" contains the kernel you lose the hardware
support for recent hardware.
What should be more important for users than having their hardware
supported?
And although it's off-topic for linux-kernel, your suggested
"well-maintained additional package collections" also sound horrific:
As an example, consider the following:
- a new version of GNOME might require a new version of GTK+
- recently GTK+ 2.12 entered Debian testing, and this new version
exposed a serious bug in the xfwm4 package that was at that time
in testing
There are at least two obvious problems with what you propose:
- to avoid breakage for users, a huge amount of coordination
  work between the "additional package collections" would be required
- most users want their software to work correctly, not crash, etc.;
  when a distribution has a 2-3 month freeze before a release, that's
  not lost time, that's time where _all_ software that will be shipped
  gets tested and bugs get fixed
There's one important thing you must keep in mind:
Geeks (like you and me) can get the latest software versions from the
development versions of their distribution, but for most users - for
whom a computer is a tool that should simply work (no matter whether
it's a server or a desktop) and not a toy - the QA work done during a
freeze has a _huge_ value.
Fedora, openSUSE and Ubuntu all offer new releases every 6 months, which
means the software in the latest release is always less than a year old,
while users still get the QA work and the resulting stability of a
freeze. This seems to be a good solution for desktop users.
cu
Adrian (2.6.16 maintainer)
--
"Is there not promise of rain?" Ling Tan asked suddenly out
of the darkness. There had been need of rain for many days.
"Only a promise," Lao Er said.
Pearl S. Buck - Dragon Seed
On Wed, 07 Nov 2007 23:56:57 +0100
ciol <[email protected]> wrote:
> Hi, I'd like to ask you a few questions:
>
> * Do you like the way Linux distributions integrate the kernel?
>
> * Wouldn't you prefer they ship with the stable and still-maintained
> 2.6.16.x, while optionally providing the latest kernel for those who
> want it or who have new hardware?
>
> * Do you think the megafreeze development model [1] and the "I don't
> trust upstream" development model are broken? (And why?)
>
>
>
> [1] http://www.modeemi.fi/~tuomov/b/archives/2007/03/03/T19_15_26/
>
>
> (I'm going to ask this for several projects, not only the kernel)
>
It's a free world, do what you want.
--
Stephen Hemminger <[email protected]>
ciol wrote:
> * Do you think the megafreeze development model [1] and the "I don't
> trust upstream" development model are broken? (And why?)
I'm new to LKML, and because this is "my first release" I've held off
saying that the development model scares me. No doubt I need to see it
through a couple of releases, minimum. If I were less sanguine I might
say something about how often "this is tested and ready to be pushed
upstream" presages "the system is broken and doesn't work." I think if
most CIOs monitored the list they'd never let Linux anywhere near their
business systems. It's scary!
ciol wrote:
> Hi, I'd like to ask you a few questions:
>
> * Do you like the way Linux distributions integrate the kernel?
>
> * Wouldn't you prefer they ship with the stable and still-maintained
> 2.6.16.x, while optionally providing the latest kernel for those who
> want it or who have new hardware?
>
> * Do you think the megafreeze development model [1] and the "I don't
> trust upstream" development model are broken? (And why?)
Why are you asking the developers? We do this for the sake of the users.
-- Chris
Chris Snook wrote:
> Why are you asking the developers? We do this for the sake of the users.
The kernel is the software of the developers.
It's important to know how they want it to be distributed.
Adrian Bunk wrote:
[...]
Your reasoning makes sense.
But it may not be suited to applications like Apache.
Thanks.
ciol wrote:
> Chris Snook wrote:
>
>> Why are you asking the developers? We do this for the sake of the users.
>
>
> The kernel is the software of the developers.
The kernel is a technology. A distribution is a product. When decisions about
technology and decisions about products are made *entirely* by the same people,
the result is never good.
> It's important to know how they want it to be distributed.
For commercial distributions, the answer is: "In whichever way results in the
largest paycheck with the least amount of stress and effort", which means doing
it the way that's best for the customer.
Non-commercial distributions have less of this pressure, but the same principle
applies if they care about their users. If you're not interested in the users
but you are interested in the technology, you should be doing your work
upstream, so the distribution is irrelevant.
Don't get me wrong, I think stable kernel trees like 2.6.16 are a good thing.
They serve a whole bunch of different niches very well, where users are willing
to sacrifice the support benefits of a distribution kernel for the control of an
upstream kernel, while maintaining the stability of their installed base. These
users have little interest in the general-purpose distribution kernel anyway,
aside from perhaps wishing it included some config or patch that its maintainers
have elected not to include.
-- Chris
ciol <[email protected]> writes:
> Hi, I'd like to ask you a few questions:
>
> * Do you like the way Linux distributions integrate the kernel?
>
> * Wouldn't you prefer they ship with the stable and still-maintained 2.6.16.x,
> while optionally providing the latest kernel for those who want it or who have
> new hardware?
>
> * Do you think the megafreeze development model [1] and the "I don't trust
> upstream" development model are broken? (And why?)
I think a megafreeze development model is sane. Finding a collection
of software versions that are all known to work together is very
interesting, and useful. Making it so you can deliver something that
just works to end users is always interesting.
The only thing you miss out on are new features like the latest hardware
support.
I think forking packages just so you can claim they are frozen is a
dubious practice, especially if features are added in those forks.
(Yes, new hardware support is a feature.)
Eric
On 2007-11-12, Eric W. Biederman <[email protected]> wrote:
> I think a megafreeze development model is sane. Finding a collection
> of software versions that are all known to work together is very
> interesting, and useful. Making it so you can deliver something that
> just works to end users is always interesting.
The distros only do that for the most important and most popular
packages, most of which have become rather "generic" and faceless
behemoths in the sense that they do not have definite authors and so
on, and for which it takes years to respond to bug reports in any case
(if someone even bothers to enter the bug in registration-required
Suckzilla; Debian's reportbug is much more usable in this respect,
even though it typically takes another year for the package maintainer
to report things back upstream, if that ever even happens).
Other more marginal software with a face, the distros just throw in
and expect the author to deal with users having problems with ancient
development snapshots and even bugs in stable versions that the distros
simply refuse to fix. They should not distribute that kind of software
at all. That is, distros should stick to providing stable base systems,
and fully supported (and renamed if not generic) customised versions of
other software for their target audience. For the rest, there should
be better mechanisms for authors to distribute binary or otherwise
easily and reliably installable packages of their software.
Closed-source operating systems are more decentralised than Linux,
where the par^W^W a few big distros have de facto central control
over the software that users can conveniently install.
--
Tuomo
On Mon, Nov 12, 2007 at 01:51:25PM +0000, Tuomo Valkonen wrote:
> On 2007-11-12, Eric W. Biederman <[email protected]> wrote:
> > I think a megafreeze development model is sane. Finding a collection
> > of software versions that are all known to work together is very
> > interesting, and useful. Making it so you can deliver something that
> > just works to end users is always interesting.
>
> The distros only do that for the most important and most popular
> packages, most of which have become rather "generic" and faceless
> behemoths in the sense that they do not have definite authors and so
> on, and for which it takes years to respond to bug reports in any case
> (if someone even bothers to enter the bug in registration-required
> Suckzilla; Debian's reportbug is much more usable in this respect,
> even though it typically takes another year for the package maintainer
> to report things back upstream, if that ever even happens).
>
> Other more marginal software with a face, the distros just throw in
> and expect the author to deal with users having problems with ancient
> development snapshots and even bugs in stable versions that the distros
> simply refuse to fix. They should not distribute that kind of software
> at all. That is, distros should stick to providing stable base systems,
> and fully supported (and renamed if not generic) customised versions of
> other software for their target audience. For the rest, there should
> be better mechanisms for authors to distribute binary or otherwise
> easily and reliably installable packages of their software.
The problem is not what the distributions ship; the problem is simply
that problems with distribution-packaged software should be reported
to the distribution, not upstream.
And to become at least marginally on-topic again:
Assuming your "stable base system" contains the Linux kernel, how would
you prevent users from reporting bugs in their ancient kernels [1] here?
> Closed-source operating systems are more decentralised than Linux,
> where the par^W^W a few big distros have de facto central control
> over the software that users can conveniently install.
You should rephrase it:
Closed-source operating systems offer less software that is both
available for convenient installation and supported by the vendor of
the operating system.
No one forces any users to install the software their distribution
supports - people can (and sometimes do) install other software or
other versions of some software when they need it.
But the good thing about open source software is that when you believe
your ideas are better than what current distributions do you can
implement your ideas and create your own distribution. Then time will
tell whether you were right or wrong.
> Tuomo
cu
Adrian
[1] keep in mind that when using a 6-month-old kernel, this kernel
    differs by more than one million lines of code (sic) from the
    current kernel
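    (For the curious: such numbers are easy to check yourself, assuming
    you have a git clone of the mainline kernel tree with its release
    tags; the tag pair below is only an example. The command prints the
    total files changed and lines added/deleted between two releases:
    git diff --shortstat v2.6.16 v2.6.23
    )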
--
"Is there not promise of rain?" Ling Tan asked suddenly out
of the darkness. There had been need of rain for many days.
"Only a promise," Lao Er said.
Pearl S. Buck - Dragon Seed
On 2007-11-12 16:20 +0100, Adrian Bunk wrote:
> The problem is not what the distributions ship; the problem is simply
> that problems with distribution-packaged software should be reported
> to the distribution, not upstream.
>
> And to become at least marginally on-topic again:
> Assuming your "stable base system" contains the Linux kernel, how would
> you prevent users from reporting bugs in their ancient kernels [1] here?
You obviously can't prevent them from reporting problems, but you can
dissuade them from doing that. The kernel (and "Linux" in general)
being rather "generic" and faceless in the sense I mentioned in the
previous post does work to that end, and users are more aware that
"it's the Debian kernel", than it's, say, "a Debian-corrupted
ancient development snapshot of Ion". The Debian kernel packages
are not even called 'Linux', just 'kernel', unlike the Ion packages
(in "non-free" these days).
It also matters that the so-called community around such faceless
generic projects is much bigger than that around marginal software,
where the author is likely to be the only one able to help, and the
typical contact address. Do people bother Linus specifically about the
kernel? Do they actually expect Linus himself to deal with their
worries? Typically, no. Not so with more marginal software that has a
face and a definite author.
The distro's kernel is also obviously much better tested and supported
than the random marginal software they throw in without regard for the
upstream state of development -- distros don't care about upstream,
they just use them as workhorses. The average luser doesn't even need to
know what kernel or other generic software the distro installs by
default, and many of the problems arise in the distro installation
phase. But when they hear of something new and fancy like Ion, they
just go ahead and install it from the distro, because there's no other
convenient way and they're used to it, and get broken and unsupported
ancient crap (corrupted to use AA/XML-fascist Xft/fontconfig that the
author will have nothing to do with [1]) without being aware of this,
and then come bothering the author, the software having a face.
> You should rephrase it:
> Closed-source operating systems offer less software that is both
> available for convenient installation and supported by the vendor of
> the operating system.
That's utter and total bullshit. Distros don't provide proper support
for the marginal software they throw in to be able to brag about a huge
(and mostly worthless) package collection. It is very laborious to
install original-author software rather than distro (Party) software,
unlike in closed-source operating systems, where the OS developer
only provides a rather stable base on which to install third-party
software, not a broken megafrozen snapshot of Everything.
> No one forces any users to install the software their distribution
> supports - people can (and sometimes do) install other software or
> other versions of some software when they need it.
It's not that they can't, just that they often won't, because it's
so laboursome. The "freedom" in free software is merely theoretical,
not practical. (See below too.)
> But the good thing about open source software is that when you believe
> your ideas are better than what current distributions do you can
> implement your ideas and create your own distribution.
Haha, the typical FOSS advocate's fallacy. Quote:
“You have the binary, you can crack it.” Does that sound familiar? No? How
about? “It's free software, you can fix or implement what you want.” These
two statements are fundamentally the same: they expect that you have the
time and skill to modify the software to your needs. That it is easier when
the source is out in the open – and it doesn't even have to be “free” or
“open source” – is just a detail. Nevertheless, the uncritical free software
and open source advocates often resort to this argument when their software
is found flawed. It is true, the herd of the bazaar indeed has the power to
modify software to its liking – to the shoddy least common denominator
product that herd desires are for. It is even possible for the unique one to
set up a shop within the bazaar, providing minor improvements to a few of
the bazaar's shoddy products. But to build a cathedral providing treatments
to all the ills of the bazaar – that demands more effort than the herd can
appreciate. There is no practical choice but to use the shoddy products of
the bazaar. In the present state of affairs, for those not of the herd, the
only choice – the only practical freedom – in free software, is the choice
not to use it.
[1]: http://iki.fi/tuomov/b/archives/2006/03/17/T20_15_31/
--
Tuomo
Adrian Bunk wrote:
> On Mon, Nov 12, 2007 at 01:51:25PM +0000, Tuomo Valkonen wrote:
>
>>...
>
> The problem is not what the distributions ship; the problem is simply
> that problems with distribution-packaged software should be reported
> to the distribution, not upstream.
>
> And to become at least marginally on-topic again:
> Assuming your "stable base system" contains the Linux kernel, how would
> you prevent users from reporting bugs in their ancient kernels [1] here?
>
>
Isn't the kernel easier to sync with the latest and greatest?
The core libc and supporting libraries are the core, and the toolchain
is the core development environment. Those can be updated twice or even
just once a year. The kernel can be updated once a month if you like.
I stopped using Debian myself and use a DIY Linux-based toolchain and
libc. That's the stable core that I have been using for 4 months. If
Debian can reduce the footprint of the "stable core" and do monthly
releases of package bundles, I will use it again.
--
Democracy is about two wolves and a sheep deciding what to eat for dinner.
On Mon, Nov 12, 2007 at 06:02:54PM +0200, Tuomo Valkonen wrote:
> On 2007-11-12 16:20 +0100, Adrian Bunk wrote:
> > The problem is not what the distributions ship; the problem is simply
> > that problems with distribution-packaged software should be reported
> > to the distribution, not upstream.
> >
> > And to become at least marginally on-topic again:
> > Assuming your "stable base system" contains the Linux kernel, how would
> > you prevent users from reporting bugs in their ancient kernels [1] here?
>
> You obviously can't prevent them from reporting problems, but you can
> dissuade them from doing that.
Yes, by immediately asking:
"Is this issue still present with $latest_upstream_version?"
If you don't do this for Ion, then that's your fault.
> The kernel (and "Linux" in general)
> being rather "generic" and faceless in the sense I mentioned in the
> previous post does work to that end, and users are more aware that
> "it's the Debian kernel", than it's, say, "a Debian-corrupted
> ancient development snapshot of Ion". The Debian kernel packages
> are not even called 'Linux', just 'kernel',
>...
Let's check your statement against reality:
They are called linux-*
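(Anyone on a Debian box can check for themselves - apt-cache takes a
regular expression, and the exact output will vary by release:
apt-cache search --names-only '^linux-image'
)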
>...
> > But the good thing about open source software is that when you believe
> > your ideas are better than what current distributions do you can
> > implement your ideas and create your own distribution.
>
> Haha, the typical FOSS advocate's fallacy. Quote:
>...
Open source development does not work by whining and trying to spread
your opinion through fake "polls" - either you implement what you want
yourself or it won't happen.
That's the simple truth, and it's your own time you waste when you
continue complaining. It took me a few years to learn this lesson, too.
But an important observation is that when you start something other
people consider worthwhile there will be other people who participate in
what you do, or who even continue it when you decide to switch to other
projects instead.
> Tuomo
cu
Adrian
--
"Is there not promise of rain?" Ling Tan asked suddenly out
of the darkness. There had been need of rain for many days.
"Only a promise," Lao Er said.
Pearl S. Buck - Dragon Seed
On Tue, Nov 13, 2007 at 12:13:41AM +0800, Rogelio M. Serrano Jr. wrote:
> Adrian Bunk wrote:
> > On Mon, Nov 12, 2007 at 01:51:25PM +0000, Tuomo Valkonen wrote:
> >
> >>...
> >
> > The problem is not what the distributions ship; the problem is simply
> > that problems with distribution-packaged software should be reported
> > to the distribution, not upstream.
> >
> > And to become at least marginally on-topic again:
> > Assuming your "stable base system" contains the Linux kernel, how would
> > you prevent users from reporting bugs in their ancient kernels [1] here?
> >
> Isn't the kernel easier to sync with the latest and greatest?
>
> The core libc and supporting libraries are the core, and the toolchain
> is the core development environment. Those can be updated twice or even
> just once a year. The kernel can be updated once a month if you like.
A new release of the Linux kernel has more than half a million changed
lines of code. If you do any estimate based on how many lines of changed
code equal one newly introduced bug, you see the problem...
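(To put rough numbers on it, purely for illustration: at a rate of one
new bug per 1,000 changed lines, 500,000 changed lines would mean on
the order of 500 new bugs per release. The exact rate doesn't matter
much; the order of magnitude does.)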
And the difference between an upstream kernel and a distribution kernel
is 3-6 months of testing and bugfixing.
> I stopped using Debian myself and use a DIY Linux-based toolchain and
> libc. That's the stable core that I have been using for 4 months. If
> Debian can reduce the footprint of the "stable core" and do monthly
> releases of package bundles, I will use it again.
Geeks like you and me want the latest software
(I'm using Debian unstable/testing).
But most users want a Linux installation that simply works - and this
includes all software on the system at all times.
cu
Adrian
--
"Is there not promise of rain?" Ling Tan asked suddenly out
of the darkness. There had been need of rain for many days.
"Only a promise," Lao Er said.
Pearl S. Buck - Dragon Seed
On 2007-11-12 17:56 +0100, Adrian Bunk wrote:
> Yes, by immediately asking:
> "Is this issue still present with $latest_upstream_version?"
That's still a user complaining about problems fixed ages ago,
and a couple more who never even bothered complaining, just
decided that the software is crap because it doesn't work.
> Let's check your statement against reality:
>
> They are called linux-*
I have various kernel-image packages.
> Open source development does not work by whining and trying to spread
> your opinion through fake "polls" - either you implement what you want
> yourself or it won't happen.
I did not post this "poll".
I'll just complain about how shoddy FOSS has become, and switch to Windows.
There's little point in using FOSS these days, when nothing is done
better, and many things worse, than in closed source. I simply don't
have the time to fix all the woeful FOSS crap by myself.
> But an important observation is that when you start something other
> people consider worthwhile there will be other people who participate in
> what you do, or who even continue it when you decide to switch to other
> projects instead.
Typically they write a clone from scratch, because the right
herd-approved language or the right license wasn't originally used,
or so. They expect the author to maintain their shoddy patches that
would be better implemented as separate modules -- taking example
from the monolithic unbuildable kernel [1] -- and when you ask for
quality, they disappear or threaten a fork. It's very difficult
to get quality contributions. And then there are projects that would
really demand a lot of input before even starting the actual coding
work, but the FOSS herd works on the worse-is-better fallacy, which
results in crappy software whose fundamental flaws it has become too
late to fix by the time you notice them. Essential core software
needs a lot of input before the work starts, instead of a small
(one-man) elite creating it and then the herd uncritically adopting
it based on the "oooh! shiny!" factor, after which it will be very
difficult to fix.
[1]: http://iki.fi/tuomov/b/archives/2007/04/01/T19_09_22/
--
Tuomo
On 2007-11-12, Adrian Bunk <[email protected]> wrote:
> Geeks like you and me want the latest software
> (I'm using Debian unstable/testing).
>
> But most users want a Linux installation that simply works - and this
> includes all software on the system at all times.
I'm not in either category. I want a truly stable (as in "quality", not
as in "static" as the distros use the term) base system that simply
works, but I want to follow certain interesting software more closely.
I also doubt the average luser wants the broken software that distros
include in their megafrozen "stable" collections. Of course, most of
the software that the average luser wants is much better supported
and tested than marginal software, but they do occasionally venture
there, and get crap.
--
Tuomo
On Mon, 12 Nov 2007, Adrian Bunk wrote:
>
> On Tue, Nov 13, 2007 at 12:13:41AM +0800, Rogelio M. Serrano Jr. wrote:
>> Adrian Bunk wrote:
>>>
>> Isn't the kernel easier to sync with the latest and greatest?
>>
>> The core libc and supporting libraries are the core, and the toolchain
>> is the core development environment. Those can be updated twice or even
>> just once a year. The kernel can be updated once a month if you like.
>
> A new release of the Linux kernel has more than half a million changed
> lines of code. If you do any estimate based on how many lines of changed
> code equal one newly introduced bug, you see the problem...
>
> And the difference between an upstream kernel and a distribution kernel
> is 3-6 months of testing and bugfixing.
This is very true. The big question is which side of the fork gets more
testing and bugfixes: the distro with their paid developers, or the
kernel.org kernel with input from many different distros and the rest
of the community?
David Lang
On Mon, Nov 12, 2007 at 07:16:26PM +0200, Tuomo Valkonen wrote:
> On 2007-11-12 17:56 +0100, Adrian Bunk wrote:
> > Yes, by immediately asking:
> > "Is this issue still present with $latest_upstream_version?"
>
> That's still a user complaining about problems fixed ages ago,
> and a couple more who never even bothered complaining, just
> decided that the software is crap because it doesn't work.
>
> > Let's check your statement against reality:
> >
> > They are called linux-*
>
> I have various kernel-image packages.
>...
Either they are empty transitional packages depending on the linux-*
packages or you are not using Debian stable but Debian oldstable (the
latter would be funny in the context of your complaints...).
> Tuomo
cu
Adrian
--
"Is there not promise of rain?" Ling Tan asked suddenly out
of the darkness. There had been need of rain for many days.
"Only a promise," Lao Er said.
Pearl S. Buck - Dragon Seed
Tuomo Valkonen wrote:
>
>> But the good thing about open source software is that when you believe
>> your ideas are better than what current distributions do you can
>> implement your ideas and create your own distribution.
>>
>
> Haha, the typical FOSS advocate's fallacy. Quote:
>
> “You have the binary, you can crack it.” Does that sound familiar? No? How
> about? “It's free software, you can fix or implement what you want.” These
>
I pretty much do that nowadays, but I don't expect non-programmers to do
likewise. At least I don't need a couple million bucks to preserve my
programming skills.
It's actually good for programmers. In the third world, where I live, it
makes a world of difference.
The availability of source does not mean that everybody must use source.
I think everybody also agrees that it's better than binary-only distribution.
> two statements are fundamentally the same: they expect that you have the
> time and skill to modify the software to your needs. That it is easier when
> the source is out in the open – and it doesn't even have to be “free” or
>
I don't understand. You are supposed to go to jail for looking at closed
source, right? And licenses are very expensive. I could not afford them
when I started out, but now I would rather spend the money on other
things, like FPGAs.
> “open source” – is just a detail. Nevertheless, the uncritical free software
> and open source advocates often resort to this argument when their software
> is found flawed. It is true, the herd of the bazaar indeed has the power to
> modify software to its liking – to the shoddy least common denominator
> product that herd desires are for. It is even possible for the unique one to
>
Really? I work very hard at it, and it seems to get better steadily.
> set up a shop within the bazaar, providing minor improvements to a few of
> the bazaar's shoddy products. But to build a cathedral providing treatments
> to all the ills of the bazaar – that demands more effort than the herd can
> appreciate. There is no practical choice but to use the shoddy products of
> the bazaar. In the present state of affairs, for those not of the herd, the
> only choice – the only practical freedom – in free software, is the choice
> not to use it.
>
> [1]: http://iki.fi/tuomov/b/archives/2006/03/17/T20_15_31/
>
>
You seem to be saying that open source will never be good enough, and
that the megafreeze problem is a necessary consequence. That's fine with
me. It's not going to make me stop using open source and Linux, because
there is a good solution to the megafreeze problem. You might as well
start ignoring Linux and the BSDs, since Microsoft is already offering
an alternative. At least you don't have to deal with that problem there.
--
Democracy is about two wolves and a sheep deciding what to eat for dinner.
On 2007-11-12, Adrian Bunk <[email protected]> wrote:
> Either they are empty transitional packages depending on the linux-*
> packages or you are not using Debian stable but Debian oldstable (the
> latter would be funny in the context of your complaints...).
Well, I'm using the two-year-old 2.6.7 kernel, because the newer
ones have become utter and total crap. (See the link in the
previous post.) It will likely be my last Linux kernel ever,
one that I will use until this system becomes simply too obsolete,
at which point, if not before, I'll switch to either FreeBSD
(quite unlikely) or Windows (XP), unless an unlikely miracle
has happened and FOSS has improved instead of continuing its
rapid downfall, dragging all of the good bits of the *nix legacy
down with it while retaining the bad bits.
--
Tuomo
On 2007-11-12, Rogelio M. Serrano Jr. <[email protected]> wrote:
> I don't understand. You are supposed to go to jail for looking at closed
> source, right? And licenses are very expensive. I could not afford them
> when I started out, but now I would rather spend the money on other
> things, like FPGAs.
The complement of "open source" is not closed source, or at least
"source not available". (And I doubt it's even illegal to look at
source you have somehow got.) It includes so-called license-free
or license-less software [1] as well -- something I'm likely to do
with any of my future work, if I release the source at all.
> Really? I work very hard at it, and it seems to get better steadily.
It's been a constant downfall ever since I started using Linux in '95
or so. None of the gripes I had then have been fixed (the bloatware
known as the X server is still allowed to hang the system), and many
other things have been turned into crap, largely due to world domination
plans. (See the "idiot box Linux" link in one of the recent posts.)
> You might as well
> start ignoring Linux and the BSDs, since Microsoft is already offering
> an alternative. At least you don't have to deal with that problem there.
Indeed, Microsoft is offering the only alternative to the suffocating
monoculturist hegemony promoted by the major FOSS projects. (OS X
is too much like Gnome.)
[1]: http://www.thedjbway.org/license_free.html
--
Tuomo
Adrian Bunk wrote:
>>
>> The core libc and supporting libraries are the core, and the toolchain
>> is the core development environment. Those can be updated twice or even
>> just once a year. The kernel can be updated once a month if you like.
>>
>
> A new release of the Linux kernel has more than half a million changed
> lines of code. If you do any estimate based on how many lines of changed
> code equal one newly introduced bug, you see the problem...
>
> And the difference between an upstream kernel and a distribution kernel
> is 3-6 months of testing and bugfixing.
>
>
True. But the libc and toolchain don't need to be as "dynamic".
>> I stopped using Debian myself and use a DIY Linux-based toolchain and
>> libc. That's the stable core that I have been using for 4 months. If
>> Debian can reduce the footprint of the "stable core" and do monthly
>> releases of package bundles, I will use it again.
>>
>
> Geeks like you and me want the latest software
> (I'm using Debian unstable/testing).
>
> But most users want a Linux installation that simply works - and this
> includes all software on the system at all times.
>
Yeah, me too. Sidux and MEPIS do not do megafreezes. I think what's
needed is to build and test groups of packages that work closely
together, and release them frequently as a group.
> cu
> Adrian
>
>
--
Democracy is about two wolves and a sheep deciding what to eat for dinner.
On 12.11.2007 17:18, Tuomo Valkonen wrote:
> On 2007-11-12, Adrian Bunk <[email protected]> wrote:
> > Geeks like you and me want the latest software
> > (I'm using Debian unstable/testing).
> >
> > But most users want a Linux installation that simply works - and this
> > includes all software on the system at all times.
>
> I'm not in either category. I want a truly stable (as in "quality", not
> as in "static" as the distros use the term) base system that simply
> works, but I want to follow certain interesting software more closely.
That's the problem(tm).
Contrary to closed-source software, all(!) OSS software is
interdependent. There is no "stand-alone" software; there is always at
least libc. (Scripts depend on a script interpreter, which in turn
depends at least on libc, so there is nothing(tm) that doesn't depend on
libc.)
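(Easy to see for yourself; output abbreviated, load addresses elided:
ldd /bin/sh
        libc.so.6 => /lib/libc.so.6 (0x...)
        /lib/ld-linux.so.2 (0x...)
)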
Whereas it is not uncommon for closed-source software to have no
external dependencies, or to bring along its dependent libraries.
e.g. (on Debian sid, *)
"kdebase" has a nice little total dependency list of only 254 packages,
"openoffice.org" comes with just 265 dependent packages, and
"gimp" travels with light baggage, only 146 packages.
Not to speak of such nice little (meta-)packages like "kde" or
"gnome", which are 649 and 684 packages.
All that interdependency forms a nice little (:-) web
(sometimes called dependency hell).
What I want to say is this: you can't update only a specific region of
your web, at least not without the risk of ripping your web here and
there.
I have been using Linux since 1995; I can live with its problems, and I
like Linux more than ever. :-)
And despite always using the latest kernel and Debian sid/unstable
for the past few years, for home & work, I have never been badly burned.
Not that there weren't a few bruises here and there. There is nothing
like a fried X for breakfast. (Which Debian managed at least 2 or 3
times in the past years.)
;-)
*:
# counts a (meta)package's total number of recursive dependencies:
apt-rdepends <(meta)packagename> | grep " Depends" | cut -d '(' -f1 | perl -pe 's/ +$//' | sort | uniq | wc -l
e.g.
apt-rdepends kde | grep " Depends" | cut -d '(' -f1 | perl -pe 's/ +$//' | sort | uniq | wc -l
Until then
--
Real Programmers consider "what you see is what you get" to be just as
bad a concept in Text Editors as it is in women. No, the Real Programmer
wants a "you asked for it, you got it" text editor -- complicated,
cryptic, powerful, unforgiving, dangerous.
On 2007-11-13 00:39 +0100, Matthias Schniedermeyer wrote:
> That's the problem(tm).
>
> Contrary to closed-source software, all(!) OSS software is
> interdependent. There is no "stand-alone" software; there is always at
> least libc. (Scripts depend on a script interpreter, which in turn
> depends at least on libc, so there is nothing(tm) that doesn't depend on
> libc.)
Closed-source software also depends on other software, but the
non-standard dependencies are usually distributed along with the
main program - which is more often a big application than a set of
small combinable utilities, as in FOSS.
In FOSS, OTOH, dependencies are not distributed along with the main
software, and when programs depend on different versions of
a library, the brain-damaged all-in-one-basket *nix file system
hierarchy results in trouble.
An intermediate is needed between those two extremes: separately
distributed dependencies that do not conflict. If packages lived
in their own directories, and were relocatable, multiple versions
could more easily coexist, and packages could specify the versions
they work with (or rather, more abstract cryptographically identified
capabilities that they require). Of course, there are other
potential conflicts besides library versions and their locations,
such as the protocol used by some essential system daemon, but
these are encountered far less often, and could sometimes be
solved by e.g. a package providing a wrapper capability.
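(A minimal sketch of the per-directory scheme, assuming an
autoconf-style package; all paths here are invented for illustration:
./configure --prefix=/packages/libfoo-1.2 && make && make install
./configure --prefix=/packages/libfoo-2.0 && make && make install
# a program built against the old version finds it explicitly:
LD_LIBRARY_PATH=/packages/libfoo-1.2/lib ./legacy-app
Nothing conflicts, because nothing shares a directory.)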
Also, what if distributions simply went into, say, /debian-etch/,
/fedora-core4/, and so on? You could then, to a great extent, run
multiple distributions simultaneously, just having to choose a few
services from one of them that the others would have to use, and
hopefully work with, as well as a kernel. The problems are not
insurmountable: you can already run another OS in a virtual machine
and set DISPLAY to point to your main OS, although it's a bit cumbersome.
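(The cumbersome version, roughly - the addresses are made up, and the
host X server must be willing to accept TCP connections, which some
setups disable by default:
# on the host, allow clients from the guest (xauth would be cleaner):
xhost +192.168.122.2
# inside the guest/VM, point clients at the host's X server:
DISPLAY=192.168.122.1:0 xterm &
)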
--
Tuomo
Tuomo Valkonen wrote:
> Well, I'm using the two-year-old 2.6.7 kernel, because the newer
> ones have become utter and total crap. (See the link in the
> previous post.) It will likely be my last Linux kernel ever,
> one that I will use until this system becomes simply too obsolete,
> at which point, if not before, I'll switch to either FreeBSD
> (quite unlikely) or Windows (XP), unless an unlikely miracle
> has happened and FOSS has improved instead of continuing its
> rapid downfall, dragging all of the good bits of the *nix legacy
> down with it while retaining the bad bits.
>
I accuse you of theatrics. I don't believe you. Nobody who truly
"gets" UNIX (which you clearly do) would choose to switch to Windows.
It's not going to happen; not from you. ;-)
On Mon, 12 Nov 2007 17:53:06 +0000 (UTC)
Tuomo Valkonen <[email protected]> wrote:
> The complement of "open source" is not closed source, or at least
> "source not available". (And I doubt it's even illegal to look at
> source you have somehow got.) It includes so-called license-free
> or license-less software [1] as well -- something I'm likely to do
> with any of my future work, if I release the source at all.
The only problem with djb's scheme is that you cannot mirror the software unless given permission from the author. No, not even unmodified source.
And in some (weird) parts of the world, all rights are reserved by default. Not just distribution rights, but also use rights. So you have to spell it out that you allow people to use the software unconditionally. There's no need to disclaim warranty if you don't expect to ever be sued. (Anyway, it's not sold, so most warranty rights don't apply.)
Disclaimer: IANAL.
> It's been a constant downfall ever since I started using Linux in '95
> or so. None of the gripes I had then have been fixed (the bloatware
> known as the X server is still allowed to hang the system), and many
> other things have been turned into crap, largely due to world domination
> plans. (See the "idiot box Linux" link in one of the recent posts.)
Everybody knows the story of the X server development model, which was very ineffective until the recent xorg-x11 developments. It's been just 3 releases (7.0 and 7.1 count as one) since X.org took over. Cut it some slack for now, please.
Certain commercial X servers are nice and lightweight. Go buy a licence if you hate the X.org distribution. Oh, it doesn't have the drivers you need? Tough luck.
Pay your hardware developer and/or the company.
Any system that doesn't have the same user base as Windows or enough money to bribe major hardware players will suffer the same problem.
(Apple can pay some for drivers)
BTW, why don't we see graphics systems other than X in the FOSS market?
All such projects I can recall failed: Y-Windows, Fresco...
Only Apple had enough manpower to implement its own replacement (Quartz), and they were in a better place: they were writing a system from scratch, with almost no concerns about backwards compatibility.
So, in order to start an X replacement, you have to write an operating system around it...
> Indeed, Microsoft is offering the only alternative to the suffocating
> monoculturist hegemony promoted by the major FOSS projects. (OS X
> is too much like Gnome.)
Huh? It is Microsoft that is the monoculture.
They supply one true GUI with scarce documentation, a package of bundled software and libraries (including the C library), one true media system (DirectShow) and one true configuration system - the registry. And also an internet browser.
Recently the suite has been expanded with the .NET runtime.
Guess what - these mostly (uhm...) work fine.
About the only difference is that people (developers external to Microsoft) bundle software with its dependencies.
I wonder why almost no distributions do that...
You can install multiple versions of the same package... it requires a moderate amount of work, nothing a medium-size distribution can't handle. (E.g. some Linux From Scratch installation methods and GoboLinux support multiple installations of the same package; see the sketch below.)
Just some build system patches for certain software.
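(From memory, the GoboLinux layout looks roughly like this; the names
are illustrative:
ls /Programs/Foo
1.2  2.0  Current
# switch the active version by repointing one symlink:
ln -sfn /Programs/Foo/2.0 /Programs/Foo/Current
)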
Oh, you want to get everything with no work on your part? Go ask your vendor... that is, upstream.
BTW, good flame bait.
> [1]: http://www.thedjbway.org/license_free.html
Footnotes have no place in emails. They are not books.
On 2007-11-13 13:28 +0100, Radoslaw Szkodzinski wrote:
> The only problem with djb's scheme is that you cannot mirror the software
> unless given permission from the author. No, not even unmodified source.
So? That's why I also call it the "piractic license" and the "apathy
license" -- do what you can get away with without pissing off the
author. In private use you can hardly individually piss off the author,
but powerful distributors can easily do that.
> So, in order to start an X replacement, you have to write an operating
> system around it...
I'm not asking for an X replacement. Despite all of its minor flaws,
the old X (excluding newer extensions such as Xinerama, Xrender, etc.)
in general is much better than anything the FOSS herd has come up with.
I'm just asking that a small mode-switching part of the graphics driver
be in the kernel (or, preferably, a micro-kernel service), so that it
would be possible to restore the system to a usable state after X
crashes, and generally to switch virtual consoles without demanding
that applications using graphics mode do it themselves. There's already
DRI shit in the kernel, so this is not unprecedented. And yet, things
have not improved a bit from the SVGAlib times. The desktop herd only
cares about glitz, not stability and quality.
> Huh? It is Microsoft that is the monoculture.
That the FOSS herd apes with passion.
> They supply one true GUI with scarce documentation, a package of bundled
> software and libraries (including the C library), one true media system
> (DirectShow)
They supply all kinds of shit, but it's easier to install third-party
alternatives than on Linux, where The Par^W^Wa few big distributions
have de facto control over easily installable software. The biggest
problem I have with Windows is that applications manage their own
windows, instead of a window manager -- Ion, a shining beacon of
usability all alone in the night, being my last umbilical cord to
FOSS crap. If I could have that in Windows, there would be little
reason at all to use crappy FOSS operating systems. I fear that
the day may come when even that is made practically impossible
by FDO idiocies.
> and one true configuration system - the registry.
Better that than the trendy XML crap and cryptic shell scripts (udev).
> And also an internet browser.
And the FOSS herd supplies... Gnomefox. No thanks, I'll stick to Opera,
which I can have on Windows too.
> Footnotes have no place in emails. They are not books.
It was not a footnote but a link, a citation. Inline links have no
place in emails; they mess up the flow.
--
Tuomo