Just curious, are there any plans to put Mosix into the standard kernel,
maybe in 2.5, so folks could just configure it and go? It seems that the
number of people with more than one computer might make this a feature many
would at least want to try, especially if it was available as an option by
default. Is there anything in the Mosix folks' implementation that would
prevent this?
--
Zack Brown
> Just curious, are there any plans to put Mosix into the standard kernel,
> maybe in 2.5, so folks could just configure it and go? It seems that the
> number of people with more than one computer might make this a feature many
> would at least want to try, especially if it was available as an option by
> default. Is there anything in the Mosix folks' implementation that would
> prevent this?
I can't speak for the kernel folks, but has it been ported to
architectures other than i386 yet? Last I heard, it hadn't been,
but that was a very long time ago.
C
On Tue, 27 Feb 2001, Christopher Chimelis wrote:
>
> > [Zack's original question snipped]
>
> I can't speak for the kernel folks, but has it been ported to
> architectures other than i386 yet? Last I heard, it hadn't been, but that
> was a very long time ago.
At any rate, that doesn't seem like a barrier to entry, and might even
encourage folks to work on porting it to the other architectures.
be well,
Zack
>
> C
>
--
Zack Brown
Zack Brown wrote:
>
> [Zack's original question snipped]
I'm not a knowledgeable person, but I've been following Mosix/beowulf/? for
a few years and trying to keep up.
I've thought that it would be good to break up the different clustering
frills -- node identification, process migration, process hosting, distributed
memory, yadda yadda blah, into separate bite-sized portions.
Centralization would be good for standardizing on what /proc/?/?/? you read to
find out what clusters you are in, and what your node number is there. There
is a lot of theoretical work to be done.
Until then, I don't expect to see the Complete Mosix Patch Set available
from ftp.kernel.org in its current form, as a monolithic set that does many things,
including its Very Own Distributed File System Architecture.
If any of the work from Mosix is to make it Into The Standard Kernel, it will be
by backporting and standardization.
Is there a good list to discuss this on? Is this the list? Which pieces of
clustering-scheme patches would be good to have?
I think a good place to start would be node numbering.
The standard node numbering would need to be flexible enough to have one machine
participating in multiple clusters at the same time.
/proc/cluster/.... this would be the standard root point for clustering stuff
/proc/mosix would go away, become /proc/cluster/mosix
and the same with whatever bproc puts into /proc; that stuff would move to
/proc/cluster/bproc
Or, the status quo will endure, with cluster hackers playing catch-up.
--
David Nicol 816.235.1187 [email protected]
"Americans are a passive lot, content to let so-called
experts run our lives" -- Dr. Science
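To make the proposal concrete, here is a minimal 2.4-era module sketch that
carves out /proc/cluster and publishes a node number beneath it. This is not
code from any of the cluster projects -- the directory layout just follows
David's suggestion, and the hard-coded node number is a placeholder:

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/proc_fs.h>

static struct proc_dir_entry *cluster_root, *mosix_dir;

/* read handler for /proc/cluster/mosix/node_number */
static int node_number_read(char *page, char **start, off_t off,
                            int count, int *eof, void *data)
{
        *eof = 1;
        return sprintf(page, "%d\n", 42);       /* placeholder node number */
}

static int __init cluster_proc_init(void)
{
        cluster_root = proc_mkdir("cluster", NULL);     /* /proc/cluster */
        if (!cluster_root)
                return -ENOMEM;
        mosix_dir = proc_mkdir("mosix", cluster_root);  /* /proc/cluster/mosix */
        if (!mosix_dir) {
                remove_proc_entry("cluster", NULL);
                return -ENOMEM;
        }
        create_proc_read_entry("node_number", 0444, mosix_dir,
                               node_number_read, NULL);
        return 0;
}

static void __exit cluster_proc_exit(void)
{
        remove_proc_entry("node_number", mosix_dir);
        remove_proc_entry("mosix", cluster_root);
        remove_proc_entry("cluster", NULL);
}

module_init(cluster_proc_init);
module_exit(cluster_proc_exit);

A bproc tree would hang off the same root the same way, via
proc_mkdir("bproc", cluster_root).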
On Tue, 27 Feb 2001, David L. Nicol wrote:
> /proc/cluster/.... this would be the standard root point for clustering stuff
>
> /proc/mosix would go away, become /proc/cluster/mosix
>
> and the same with whatever bproc puts into /proc; that stuff would move to
> /proc/cluster/bproc
#include <std_rants/Thou_Shalt_Not_Shite_Into_Procfs>
Guys, if you want a large subtree in /proc - whack yourself over the head
until you realize that you want an fs of your own. I'll be more than
happy to help with both parts.
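For reference, what Al is pointing at -- a filesystem of your own instead of
a /proc subtree -- starts from register_filesystem(). A minimal 2.4-style
skeleton follows; all names are invented, and the actual superblock setup is
omitted, so every mount attempt fails cleanly:

#include <linux/module.h>
#include <linux/fs.h>

/* A real implementation would allocate a root inode and dentry and set
 * sb->s_op and sb->s_root here.  Returning NULL makes mounts fail. */
static struct super_block *clusterfs_read_super(struct super_block *sb,
                                                void *data, int silent)
{
        return NULL;
}

static DECLARE_FSTYPE(clusterfs_type, "clusterfs", clusterfs_read_super, 0);

static int __init clusterfs_init(void)
{
        return register_filesystem(&clusterfs_type);
}

static void __exit clusterfs_exit(void)
{
        unregister_filesystem(&clusterfs_type);
}

module_init(clusterfs_init);
module_exit(clusterfs_exit);

Once such an fs is mountable (say on /cluster), the layout under it is
entirely the implementor's business, with no fights over /proc real estate.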
On Tue, 27 Feb 2001, David L. Nicol wrote:
> I've thought that it would be good to break up the different
> clustering frills -- node identification, process migration,
> process hosting, distributed memory, yadda yadda blah, into
> separate bite-sized portions.
It would also be good to share parts of the infrastructure
between the different clustering architectures ...
> Is there a good list to discuss this on? Is this the list?
> Which pieces of clustering-scheme patches would be good to have?
I know each of the cluster projects has a mailing list, but
I've never heard of a list where the different projects come
together to eg. find out which parts of the infrastructure
they could share, or ...
Since I agree with you that we need such a place, I've just
created a mailing list:
[email protected]
To subscribe to the list, send an email with the text
"subscribe linux-cluster" to:
[email protected]
I hope that we'll be able to split out some infrastructure
stuff from the different cluster projects and we'll be able
to put cluster support into the kernel in such a way that
we won't have to make the choice which of the N+1 cluster
projects should make it into the kernel...
regards,
Rik
--
Linux MM bugzilla: http://linux-mm.org/bugzilla.shtml
Virtual memory is like a game you can't win;
However, without VM there's truly nothing to lose...
http://www.surriel.com/
http://www.conectiva.com/ http://distro.conectiva.com/
Do the Mosix folks have anything to add about possible integration into the
kernel? (should have cced them earlier, but it slipped my mind)
On Tue, 27 Feb 2001, David L. Nicol wrote:
> [David's message, quoted in full; snipped]
On Tue, 27 Feb 2001, Alexander Viro wrote:
| [Al Viro's procfs rant snipped]
Rik van Riel said:
> [Rik's announcement of the linux-cluster list snipped]
--
Zack Brown
argh. OK
On Tue, Feb 27, 2001 at 01:56:25PM -0800, Zack Brown wrote:
> [Zack's previous message, quoted in full; snipped]
--
Zack Brown
On 02.27 Zack Brown wrote:
> Do the Mosix folks have anything to add about possible integration into the
> kernel? (should have cced them earlier, but it slipped my mind)
>
And also beowulf people, [email protected].
--
J.A. Magallon $> cd pub
mailto:[email protected] $> more beer
Linux werewolf 2.4.2-ac5 #1 SMP Tue Feb 27 01:09:47 CET 2001 i686
There are two parts of MOSIX that deal with file systems.
In MOSIX, every migrated process leaves a proxy at its creation (home)
node that services all system call requests, including IO calls.
What newer versions of MOSIX did was to add the "DFSA" (direct file
system access) layer, which allows MOSIX to execute file system calls
locally for migrated processes when they are made against a cache-coherent
cluster file system (think GFS). When this was put into MOSIX, they also
wrote a write-through, non-caching file system called MFS to test their
DFSA code.
Both the MOSIX team and the global file system group have been involved
in getting their stuff to play nicely together.
ric
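The decision ric describes boils down to a simple dispatch. The sketch below
is purely illustrative C, not the actual MOSIX source; every name in it is
invented:

/* Illustrative sketch only -- not the real MOSIX code.  It shows the
 * decision the DFSA layer makes when a migrated process issues a file
 * system call. */

struct fs_request { int dfsa_capable; /* fs is cache-coherent cluster-wide */ };
struct proc_state { int migrated;     /* process is away from its home node */ };

static long run_locally(struct fs_request *r)     { return 0; /* stub */ }
static long forward_to_home(struct fs_request *r) { return 0; /* stub */ }

long dfsa_dispatch(struct proc_state *p, struct fs_request *req)
{
        /* DFSA case: against a cache-coherent cluster fs (think GFS,
         * or the MFS test fs), the call can run on the current node */
        if (p->migrated && req->dfsa_capable)
                return run_locally(req);

        /* classic MOSIX case: ship the call back to the proxy the
         * process left behind on its creation (home) node */
        return forward_to_home(req);
}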
Fellow Beowulfers,
I have yet to hear a compelling argument about why any of the clustering
systems should go into the standard kernel -- let alone a particular one or
a duck of a compromise.
The Scyld system is based on BProc -- which requires only a 1K patch to
the kernel. This patch adds 339 net lines to the kernel, and changes 38
existing lines.
The Scyld 2-kernel-monte in-place kernel reboot facility is a 600-line
module which doesn't require any patches whatsoever.
Compare this total volume to the thousands of lines of patches that
RedHat or VA add to their kernel RPMS before shipping. I just don't see
the value in fighting about what clustering should 'mean' or picking
winners when it's just not a real problem.
Scyld is shipping a for-real commercial product based on BProc and
2-kernel-Monte and our better-than-stock implementation of LFS, and
we're not losing any sleep over this issue.
I think we should instead focus our collective will on removing things
from the kernel. For years, projects like ALSA, pcmcia-cs, and VMware
have done an outstanding job sans 'inclusion' and we should more
frequently have the courage to do the same. RedHat and other linux vendors
have demonstrated ably that they know how to build and package systems
that draw together these components in an essentially reasonable way.
Regards,
Dan Ridge
Scyld Computing Corporation
On Tue, 27 Feb 2001, Rik van Riel wrote:
> [Rik's announcement of the linux-cluster list snipped]
Daniel Ridge writes:
> Fellow Beowulfers,
>
> I have yet to hear a compelling argument about why any of them should
> go into the standard kernel -- let alone a particular one or a duck of a
> compromise.
>
> The Scyld system is based on BProc -- which requires only a 1K patch to
> the kernel. This patch adds 339 net lines to the kernel, and changes 38
> existing lines.
Well, that explains your viewpoint and your motivation. :-)
> I think we should instead focus our collective will on removing things
> from the kernel. For years, projects like ALSA, pcmcia-cs, and VMware
ALSA: driver work gets done twice
pcmcia-cs: this was so bad that Linus himself was unable to install
Linux on his new laptop, so now PCMCIA support is in the kernel.
VMware: quite a pain I think
You are basically suggesting the often-rejected "split up the kernel"
idea. I think the linux-kernel FAQ covers this.
> have done an outstanding job sans 'inclusion' and we should more
> frequently have the courage to do the same. RedHat and other linux vendors
> have demonstrated ably that they know how to build and package systems
> that draw together these components in an essentially reasonable way.
So people should only get kernels from linux vendors? This is great
for your business I'd imagine, but one of the nice things about Linux
is that you can replace the kernel without too much trouble.
On Wed, 28 Feb 2001, Albert D. Cahalan wrote:
> Daniel Ridge writes:
> > I think we should instead focus our collective will on removing things
> > from the kernel. For years, projects like ALSA, pcmcia-cs, and VMware
> ALSA: driver work gets done twice
Huh?
-Dan
On Wed, 28 Feb 2001, Albert D. Cahalan wrote:
> > The Scyld system is based on BProc -- which requires only a 1K patch to
> > the kernel. This patch adds 339 net lines to the kernel, and changes 38
> > existing lines.
>
> Well, that explains your viewpoint and your motivation. :-)
I would put this the other way around. I would say that Scyld reflects the
views of its core developers -- not the other way around. Those of us who
worked on the Beowulf project at NASA felt then exactly as we do today.
> ALSA: driver work gets done twice
It's an irrelevant detail if only one works. I've used the kernel's
brand of sound support and it mostly works. I'm usually too lazy
to get ALSA. I only use it when the kernel's support hasn't quite caught
up to ALSA's.
> pcmcia-cs: this was so bad that Linus himself was unable to install
> Linux on his new laptop, so now PCMCIA support is in the kernel.
I often have exactly inverted problems with CardBus.
It's true that external packages often start to look like geologic rock
samples -- the sediment record of linux -- and that they become a record
of the things that have changed. Funniest, I think, is when these packages
record interfaces that merely churn without improving.
Naturally, it takes work to create a supportable piece of system software
that can be used on a number of platforms. I would also cite from your
text above: "driver work gets done twice".
> VMware: quite a pain I think
I think not! Installing VMware today is a breeze. SMP or UP? No problem!
Upgrade your kernel? No problem! Outstanding for a commercial third-party
application that happens to be available on two different platforms.
I would also add Myricom's GM package for Myrinet to the list of
reasonable components that are maintained outside of the kernel.
Vendors should be encouraged to provide the kind of commitment to linux
that Myri consistently demonstrates.
> You are basically suggesting the often-rejected "split up the kernel"
> idea. I think the linux-kernel FAQ covers this.
No. I'm stipulating that I work with a really interesting piece of
software that works with the Linux kernel and is available under the GPL
and which we have never even bothered to 'submit' as a patch. I'm willing
to suggest further that any responsible development community is
susceptible to "race conditions" whereby the natural and studied
development and evaluation process is end-run by attempts to 'win' and
urinate on the kernel or ANSI or POSIX or whoever (with a facility or
mechanism) directly.
These types of 'hacks' or 'denials-of-service' play on the adage that
forgiveness is easier than permission. It's hard to dispute this point
in an age when SourceForge makes it possible for anyone to maintain
a driver or filesystem or whatever without needing to 'hitch a ride'
on an existing project (ala linux) that already has a distribution and
mirroring facility.
> So people should only get kernels from linux vendors? This is great
> for your business I'd imagine, but one of the nice things about Linux
> is that you can replace the kernel without too much trouble.
No. My point was that it is possible to do a reasonable job, and that
building a kernel and subsystems so that the result is deliverable and
supportable is very hard -- harder than building a kernel and ALSA
for just yourself -- and that external components can be reasonably done
even in the tough vendor environment.
From my own perspective, I'm far too lazy to build kernels for
myself. I can hack on Beowulf clustering -- including new kernel modules
-- all day long, and the build process usually lasts a handful of seconds.
Regards,
Dan Ridge
Scyld Computing Corporation
On Wed, Feb 28, 2001 at 06:06:37PM -0500, Daniel Ridge wrote:
>
> Fellow Beowulfers,
>
> I have yet to hear a compelling argument about why any of them should
> go into the standard kernel -- let alone a particular one or a duck of a
> compromise.
>
> The Scyld system is based on BProc -- which requires only a 1K patch to
> the kernel. This patch adds 339 net lines to the kernel, and changes 38
> existing lines.
>
> The Scyld 2-kernel-monte kernel inplace reboot facility is a 600-line
> module which doesn't require any patches whatsoever.
>
> Compare this total volume to the thousands of lines of patches that
> RedHat or VA add to their kernel RPMS before shipping. I just don't see
> the value in fighting about what clustering should 'mean' or picking
> winners when it's just not a real problem.
Where is the value in a bunch of incompatible alternatives that have to
be laboriously ported to each new kernel version, the patches growing ever
larger as the official sources gradually diverge?
I see your point about not wanting to standardize too quickly, and I agree. But
what is the harm in *talking* about it? Why not find out if some standard kernel
clustering API could be useful, instead of just rejecting the idea out of hand?
Maybe it's my fault as the originator of this thread, for asking if _MOSIX_
would ever be included. I certainly don't think MOSIX should be favored over
other clustering implementations; it just happens to be the one I think about
most often.
I think one compelling argument in favor of including some kind of clustering
API in Linux is to encourage folks to use it and hack on it. Then we can
spawn Wintermute and become blind slaves to its awesome power.
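To make that concrete: a shared clustering API could be as small as a
registration hook plus a couple of queries. The header below is purely
hypothetical -- nothing like it exists in any kernel, and every name in it
is invented:

/* hypothetical <linux/cluster.h> -- an invented example, not a real API */

struct cluster_ops {
        int (*node_id)(void);               /* this machine's node number  */
        int (*node_count)(void);            /* nodes currently in cluster  */
        int (*migrate)(int pid, int node);  /* optional process migration  */
};

/* a MOSIX- or BProc-style scheme would register itself once at init */
int register_cluster_scheme(const char *name, struct cluster_ops *ops);
int unregister_cluster_scheme(const char *name);

A cluster-aware ps or top could then be written once against the common
interface rather than once per project.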
>
> Scyld is shipping a for-real commercial product based on BProc and
> 2-kernel-Monte and our better-than-stock implementation of LFS, and
> we're not losing any sleep over this issue.
>
> I think we should instead focus our collective will on removing things
> from the kernel. For years, projects like ALSA, pcmcia-cs, and VMware
> have done an outstanding job sans 'inclusion' and we should more
> frequently have the courage to do the same. RedHat and other linux vendors
> have demonstrated ably that they know how to build and package systems
> that draw together these components in an essentially reasonable way.
Where do you draw the line? Maybe the developers should take SMP out of the
kernel, so the different vendors can all implement their favorite versions
without any ducks of compromise. (and don't tell me there is only one
universally accepted best way to implement SMP)
Be well,
Zack
--
Zack Brown
Daniel Ridge <[email protected]> writes:
[...]
>Compare this total volume to the thousands of lines of patches that
>RedHat or VA add to their kernel RPMS before shipping. I just don't see
[...]
What's good about that? The first thing I do is to rip out the RedHat
kernel and compile and install a pure kernel from ftp.kernel.org.
It's *bad* that those vendors deliver hacked kernels. It's not
something that should be recommended as a *goal*!
When I need a new kernel version I can't sit back and hope with
crossed fingers that RedHat (or whatever vendor) comes out with a
new, hacked version of Linus' latest.
-Tor
On Thu, Mar 01, 2001 at 04:02:11PM +0100, Tor Arntsen wrote:
> Daniel Ridge <[email protected]> writes:
> [...]
> >Compare this total volume to the thousands of lines of patches that
> >RedHat or VA add to their kernel RPMS before shipping. I just don't see
> [...]
>
> What's good about that? The first thing I do is to rip out the RedHat
> kernel and compile and install a pure kernel from ftp.kernel.org.
What's good is the added value that their customers gain. If you don't feel
there is any, then you're probably running the wrong distribution. The nice
thing with Linux is that you are free to go and grab your own kernel if you
wish to do so.
> It's *bad* that those vendors deliver hacked kernels. It's not
> something that should be recommended as a *goal*!
No it isn't. You seem to assume that your requirements match everybody else's.
This is highly unlikely. A particular vendor will have a number of requirements
for their distribution (hopefully driven by customer demand). At the time that
they create a distribution, it's quite possible that no current pure kernel
meets those requirements. It's perfectly reasonable for a vendor to make
changes to meet those requirements, provided they abide by the licensing
demands. For instance, one major change in Red Hat 7.0 is that the kernel
supports USB. At the time, 2.4 wasn't ready and 2.2 didn't have support. What
do you suggest they should have done?
The source code to the kernel that Red Hat ships is readily available, so I
fail to see why shipping a modified kernel is such a big issue.
> When I need a new kernel version I can't sit back and hope with
> crossed fingers that RedHat (or whatever vendor) comes out with a
> new, hacked version of Linus' latest.
>
Commercial customers rarely "require a new kernel version". They may encounter
problems that require fixing the kernel, but otherwise, they frequently couldn't
care less about the kernel. In such cases, they need some form of support.
There's nothing to suggest that a vendor-modified kernel is inherently harder
to support, provided that it doesn't diverge too radically.
Most people buy computers to run applications, not
an operating system. Those of us who do work on OS's are in the distinct
minority. Given that Linus works on the development kernel, wanting to run
on "Linus' latest" implies you're not using Linux in production or that you're
very brave :-)
Regards,
Tim
--
Tim Wright - [email protected] or [email protected] or [email protected]
IBM Linux Technology Center, Beaverton, Oregon
Interested in Linux scalability ? Look at http://lse.sourceforge.net/
"Nobody ever said I was charming, they said "Rimmer, you're a git!"" RD VI
On Mar 1, 20:13, Tim Wright wrote:
[discussion about commercial customers and kernels snipped]
I don't think we disagree, actually. My point was a different one.
With vendor-patched kernels you can either stick to the hacked kernel
or you can move on to an ftp.kernel.org kernel. For the type of hacks
that RedHat & co are doing it shouldn't matter. If it did, I mean if
they deliver hacked software that only works with a hacked kernel,
*then* I would be angry. That would mean that I would have to track
the patches they did, and apply them myself to new kernels. OR I
would have to sit back and wait for them to come out with an updated
distro, sometime in the future. Not good.
What was brought up in this discussion about clustering patches was that
that kind of model would be a good idea. I disagree. It's one thing to
deliver patches because integration is not currently possible, or practical,
or whatever, but pointing to the RedHat hacked-kernels-are-us model as a
*goal* is what I don't like. It's sometimes a necessity, but it's not
something that should be encouraged.
Cheers,
-Tor
Hi!
> The Scyld system is based on BProc -- which requires only a 1K patch to
> the kernel. This patch adds 339 net lines to the kernel, and changes 38
> existing lines.
>
> The Scyld 2-kernel-monte kernel inplace reboot facility is a 600-line
> module which doesn't require any patches whatsoever.
There might be a big difference in the *complexity* of those patches. Distributions
only change "unimportant" stuff. And a 600-line module is not small, either..
--
Philips Velo 1: 1"x4"x8", 300gram, 60, 12MB, 40bogomips, linux, mutt,
details at http://atrey.karlin.mff.cuni.cz/~pavel/velo/index.html.