2004-03-06 18:52:00

by Grigor Gatchev

Subject: Re: A Layered Kernel: Proposal


Here it is. Get your axe ready! :-)

---

Driver Model Types: A Short Description.

(Note: This is NOT a complete description of a layer,
according to the kernel layered model I dared to offer. It
concerns only the hardware drivers in a kernel.)


Direct binding models:

In these models, kernel layers that use drivers bind to
their functions more or less directly. (The degree of
directness and the specific methods depend largely on the
specific implementation.) This is as opposed to the
indirect binding models, where a driver is first expected
to provide a description of what it can do, and binding is
done afterwards, based on that description.
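
As an illustration - a minimal sketch in C, with hypothetical
names not taken from any real kernel - direct binding means
that the upper layer simply calls functions the driver
exports, with no capability query of any kind:

  /* Exported by the one sound driver the kernel knows about. */
  int sb16_set_volume(int channel, int level);
  int sb16_play(const void *buf, unsigned long len);

  /* Upper-layer code is bound to this specific driver at
     compile/link time. */
  int play_sample(const void *buf, unsigned long len)
  {
          sb16_set_volume(0, 75);
          return sb16_play(buf, len);
  }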


Chaotic Model

This is not a specific model, but rather the lack of any
model designed in advance. Homebrew OSes most often start
with it, and add a model at a later stage, when more
drivers appear.

Advantages:

The model itself requires no design efforts at all.

No fixed sets of functions to conform to. Every coder is
free to implement whatever they like.

Unlimited upgradeability - a driver for new super-hardware
is not held down by a lowest common denominator.

Theoretically gives the best possible performance, as no
driver has to conform to anything but the abilities of the
specific hardware.

Disadvantages:

Upper layers can rely on nothing. As soon as more than one
driver for similar devices (e.g. sound cards) appears,
upper layers must check each present driver for every
single function - which amounts to implementing an ad-hoc
driver model in a place where it does not belong, and
therefore in a rather clumsy way.

Summary:

Good for homebrew OS-alikes, and for specific hardware that
is not subject to variation, e.g. a mainframe that may have
only one type of NIC, VDC etc. Otherwise it is practically
unusable - the lack of driver systematics severely limits
the internal flexibility of the kernel. It is often
extended with functions that identify what each driver is
capable of, or by requiring some (typically low) common
denominator.


Common Denominator Model

With it, hardware drivers are separated into groups - e.g.
NIC drivers, sound drivers, IDE drivers. Within a group,
all drivers export the same set of functions.

Sometimes this set covers only the minimal functionality
shared by all hardware in the group - in this case it acts
as a smallest common denominator. The other possibility is
a largest common denominator - including functions for
every capability possible in the group and, where the
specific hardware doesn't support them directly, either
emulating them or signalling an invalid function.
Intermediate denominator levels are possible, too.

The larger the common denominator, and the less emulation
(a "bad function" signal instead), the closer the model
comes to the chaotic model.
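
A minimal sketch in C - hypothetical names, not an existing
interface - of such a fixed per-group function set; every
driver in the group fills in the same structure, emulating
or refusing whatever its hardware cannot do:

  /* One fixed set of operations for the whole "sound driver"
     group.  A smallest common denominator would list only what
     all hardware supports; a largest common denominator (as
     here) also lists entries some cards must emulate or refuse. */
  struct sound_ops {
          int (*open)(int device);
          int (*close)(int device);
          int (*play)(int device, const void *buf, unsigned long len);
          int (*record)(int device, void *buf, unsigned long len);
          int (*set_3d_effect)(int device, int on);
  };

  /* A driver for hardware below the denominator line must still
     fill in every entry, signalling "invalid function" where it
     cannot emulate. */
  static int cheapcard_set_3d_effect(int device, int on)
  {
          return -1;      /* not supported by this hardware */
  }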

Advantages:

It requires little model design (especially the smallest
common denominator types), and lets drivers get by with as
little design as possible. (You may create an excellent
design, of course, but you are not required to.) You can
often reuse most of the design of the other drivers in the
group.

It requires practically no plan or coordination. The coder
either provides the functionality that seems logical (if
this is the first driver in a new group), or provides the
same functionality that the other drivers in the group
already give.

Coupling the driver to the upper levels that use it is very
simple and easy. You practically don't need to check which
driver is actually down there: you know what it can offer,
no matter the hardware, and unlike with the chaotic model
you don't need to discover what the actual level of
functionality is.

It encapsulates the hardware groups well, and fixes them at
a certain level of development. This decreases how often
programmers must refresh their knowledge, and to some
extent the need to rewrite upper levels.

Disadvantages:

The common denominator denies the upper level exact access
to the underlying hardware functionality, and thus
decreases performance. With hardware below the denominator
line, you risk getting a lot of emulation which you could
largely avoid at the upper level (it is often better
informed about what exactly is desired). With hardware
above the denominator line, you may be denied access to
built-in, hardware-accelerated higher-level functions that
would increase performance and save you from doing
everything in your own code.

Once the denominator level is fixed, it is hard to move it
without seriously impairing backwards compatibility.
Hardware, however, advances, offering built-in upper-level
functions and new abilities. Thus, this model quickly
renders its denominator levels (read: its performance and
usability) obsolete.

The larger the common denominator, the more design work the
model requires. (And the quicker it becomes obsolete, given
the need to keep up with the state of the art.)

Summary:

This model is the opposite of the chaotic model. It is
canned and predictable, but inflexible and with generally
bad performance. Model upgrades are often needed (and done
more rarely than needed, at the expense of efficiency), and
often drag major rewrites of other code along with them.


Discussion:

These two models are the opposite ends of the scale. They
are rarely, if ever, used in pure form. Most often, a
driver model combines them to some extent, falling
somewhere in the middle.

The simplest combination is to define a (typically low)
common denominator and go chaotic above it. While it
theoretically provides both full access to the hardware
abilities and a guaranteed base to build on, the guaranteed
part is small, and the full access must be discovered in
the same complex way as in the chaotic model.
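
In C terms such a combination might look like this minimal
sketch (hypothetical names, purely for illustration): a small
guaranteed base, plus optional entries that may be absent and
that every upper layer has to probe for by itself - the
chaotic part above the denominator line:

  struct nic_ops {
          /* the low common denominator: every NIC driver
             provides these */
          int (*send)(const void *frame, unsigned long len);
          int (*receive)(void *frame, unsigned long maxlen);

          /* the "chaotic" extras: may be NULL, and every upper
             layer that wants them must check for them itself */
          int (*set_multicast)(const unsigned char (*addrs)[6], int count);
          int (*checksum_offload)(int enable);
  };

  static int enable_offload(const struct nic_ops *ops)
  {
          if (ops->checksum_offload)      /* probe before use */
                  return ops->checksum_offload(1);
          return -1;      /* fall back to doing it in software */
  }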

This combination also has some advantages:

Where more flexibility and performance are needed, you may
go closer to the chaotic model. And where more
replaceability and predictability are needed, you may go
closer to the CD model. The result is a driver model that
delivers its benefits where they are really needed, and has
its drawbacks in areas where they matter less.

If the optimum for a specific element, e.g. a driver group,
shifts, you can always make that shift visible. Moving the
model balance for this element will then be more readily
accepted by everyone affected by it.

Another way to combine the models is to break the big
denominator levels into multiple sublevels, and to provide
a way to describe a driver's sublevel, turning the model
into an indirect binding type.

This whole group of models, however, has a big drawback:
really good replaceability is achieved only very close to
the common denominator end of the scale, where flexibility,
performance, upgradeability and usability already tend to
suffer. Skillful tuning may postpone the negatives to a
degree, but not forever. Driver models with indirect
binding attempt to solve this problem.


Indirect binding models:

With these models, drivers are first expected to provide a
description of what they can and cannot do. The code that
uses the driver then binds to it, using that description.

Most of these models take the many strengths of the chaotic
model as a base, and try to add the good replaceability and
function-set predictability of the common denominator
model.


Class-like model

In it, the sets of functions that drivers offer are
organized in a class-like manner. Every class has a defined
set of functions. Classes form a hierarchy, like the
classes of OOP languages. (Drivers do not have to be
written in an OO language, or to be accessed only from
one.) A class typically implements all functions found in
its predecessor, and adds more (but, unlike OOP classes,
rarely reuses predecessor code).

Classes and their sets of functions are pre-defined, but
the overall model is extendable without changing what is
already present. When a new type of device appears, or a
new device offers functionality beyond the classes
currently appropriate for it, a new class may be defined.
The description of the class is created, approved and
registered (the earlier stages may be done by a driver
writer, the later ones by a central body), and is made
available to everyone concerned.

Every driver has a mandatory set of functions that report
the driver's class identification. Using them, an upper
layer can quickly determine what functionality is present.
After that, the upper layer binds to the driver much as in
the direct binding models.
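
A minimal C sketch of this - the class identifiers and
structures are hypothetical - showing the mandatory
identification and the one-time check at load time; after it,
binding is as direct as in the direct binding models:

  #include <stddef.h>

  /* Hypothetical class hierarchy: SOUND_WAVETABLE extends
     SOUND_BASIC, implementing all of its functions and adding
     more. */
  enum driver_class { SOUND_BASIC = 1, SOUND_WAVETABLE = 2 };

  struct sound_basic_ops {
          enum driver_class (*get_class)(void);   /* mandatory */
          int (*play)(const void *buf, unsigned long len);
  };

  struct sound_wavetable_ops {
          struct sound_basic_ops basic;   /* the predecessor's set */
          int (*load_patch)(int patch, const void *data, unsigned long len);
  };

  /* Done once, when the driver is loaded; afterwards the upper
     layer calls through the ops table with no further checks. */
  static struct sound_wavetable_ops *
  bind_wavetable(struct sound_basic_ops *drv)
  {
          if (drv->get_class() >= SOUND_WAVETABLE)
                  return (struct sound_wavetable_ops *)drv;
          return NULL;    /* the driver is of a lesser class */
  }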

Advantages:

If properly implemented, it gives practically the same
performance as the chaotic model. The additional checking
is performed only once, when the driver is loaded. Class
definitions may be fine-grained enough to cover the
hardware functionality almost exactly.

The upgradeability and usability of the specific drivers
are practically the same as those of the chaotic model. And
the global extendability and upgradeability of the model,
if properly designed, are practically limitless.

If properly designed, it gives nearly the same
replaceability as the CD model. (There are more things to
check, but far fewer than with the chaotic model; what you
will find in each of them is usually well documented, and
the check procedure is standard and simple.)

Disadvantages:

The model itself requires more design and maintenance work
than the direct binding models (except the larger CD
models). (Actually, the amount of maintenance work is the
same as with any CD model, but the work comes before the
need for it is felt by everybody.)

Discussion:

This is probably the best of all the driver models I have
examined closely. Unfortunately, most implementations I
have seen are rather clumsy, to say the least.


Function map model

This model is actually a largest common denominator model,
extended with the ability to provide a map of the
implemented functions. In the simplest case, the map is a
bit field in which every bit marks whether the
corresponding function is implemented. In other cases, the
map is an array of access points (e.g. function pointers).
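
In C, the simplest (bitmap) form of such a map might look
like this sketch - hypothetical names, for illustration only:

  /* The largest-common-denominator entry points, numbered. */
  enum {
          VID_FN_SET_MODE = 0,
          VID_FN_BLIT,
          VID_FN_3D_TRIANGLE,
          VID_FN_COUNT
  };

  struct video_driver {
          unsigned long implemented;  /* bit n set => entry n is real */
          int (*fn[VID_FN_COUNT])(void *args);
  };

  static int has_fn(const struct video_driver *drv, int n)
  {
          return (drv->implemented >> n) & 1UL;
  }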

Advantages:

On some architectures and platforms, this is a very
convenient way to describe a function array.

The model is simple, and therefore easy to use.

Disadvantages:

The model has all the disadvantages of an LCD (largest
common denominator) model.

Discussion:

The advantages of the model are relatively small, while the
disadvantages are big. For this reason, it is used mostly
as an addition to another model - e.g. to the class-like
model.


Global discussion:

The list of models provided here is rather high-level. This
is intentional: when designing, one must clarify one level
at a time, much as with coding.

The list is also incomplete. For example, I never had the
time to look properly into the OS/2 SOM for ideas, though
it is said to work very well and provide excellent
performance. More details of the QNX driver model might
also be of interest. Someone with in-depth knowledge of
these might be able to enhance this list.

----


2004-03-08 03:12:36

by Mike Fedyk

Subject: Re: A Layered Kernel: Proposal

Grigor Gatchev wrote:
> Here it is. Get your axe ready! :-)
>
> ---
>
> Driver Model Types: A Short Description.
>
> (Note: This is NOT a complete description of a layer,
> according to the kernel layered model I dared to offer. It
> concerns only the hardware drivers in a kernel.)
>

Looks like you're going to need to get a little deeper to keep it from
being OT on this list.

What are the driver designs of, say, Solaris, OS/2, and Win32 (NT &
9x trees), and how are they good and how are they bad?

What specific (think API changes, nothing generalized here *at all*)
changes could benefit linux, and why and how? Nobody will listen to
hand waving, so you need a tight case for each change.

HTH,

Mike

2004-03-08 12:24:22

by Grigor Gatchev

Subject: Re: A Layered Kernel: Proposal



On Sun, 7 Mar 2004, Mike Fedyk wrote:

> Grigor Gatchev wrote:
> > Here it is. Get your axe ready! :-)
> >
> > ---
> >
> > Driver Model Types: A Short Description.
> >
> > (Note: This is NOT a complete description of a layer,
> > according to the kernel layered model I dared to offer. It
> > concerns only the hardware drivers in a kernel.)
> >
>
> Looks like you're going to need to get a little deeper to keep it from
> being OT on this list.
>
> What is the driver designs of say, solaris, OS/2, Win32 (NT & 9x trees)
> and how are they good and how are they bad?
>
> What specific (think API changes, nothing generalized here *at all*)
> changes could benefit linux, and why and how? Nobody will listen to
> hand waving, so you need a tight case for each change.
>
> HTH,
>
> Mike

Dear Mike,

A year ago, I was teaching a course on UNIX security. After the
first hour, a student - a military man with experience commanding
PC-DOS users - interrupted me: "What is all that mumbo-jumbo about?
Users, groups, permissions - all this is empty words, noise! Don't
you at least classify your terminal, and issue orders on who uses it?
Man, either talk some real stuff, or I am not wasting any more of my
time on you!"

Of course, I was happy to let him "stop wasting his time on me".

Reading some of the posts here, I get that deja vu. I know the driver
designs of some OSes, and don't know others. I could waste a month or
two of work and post a huge description of all the big OS driver
models that I know, or waste a year of work and give you a
description of all the big OS driver models. Would this give you
anything more than what has already been posted? Wouldn't you read my
hundreds of pages, try to summarise them all, and eventually come to
the same conclusions?

Or would you try to pick this from one model and that from another,
and end up assembling a creature with eagle wings, dinosaur teeth,
antelope legs and shark fins, and wondering why it can neither fly
nor run nor swim really well, why its performance is bad? That can't
be it; I must have misunderstood you.

Also, does "think API changes, nothing generalised *at all*" mean anything
different from "think code, no design *at all*"? If this is some practical
joke, it is not funny. (I can't believe that a kernel programmer will not
know why design is needed, where is its place in the production of a
software, and how it should and how it should not be done.)

OK. Let me try to explain it once more:

While coding, think coding. While designing, think designing. Design comes
before coding; otherwise you design while coding, and produce a mess.
Enough of such an experience, and you start believing that design without
coding is empty words, noise. Hand waving.

What I gave is more than enough to start designing a good driver
model. After the design is OKed, details of implementation, e.g. API
changes, can be developed. Developing them now, however, is
premature, for the reasons explained just above. Let's not put the
cart before the horse.

Or am I wrong?


2004-03-08 17:40:08

by Theodore Ts'o

Subject: Re: A Layered Kernel: Proposal

On Mon, Mar 08, 2004 at 02:23:43PM +0200, Grigor Gatchev wrote:
>
> Also, does "think API changes, nothing generalised *at all*" mean anything
> different from "think code, no design *at all*"? If this is some practical
> joke, it is not funny. (I can't believe that a kernel programmer will not
> know why design is needed, where is its place in the production of a
> software, and how it should and how it should not be done.)

So give us a design. Make a concrete proposal. Tell us what the
API's --- with C function prototypes --- should look like, and how we
should migrate what we have to your new beautiful nirvana.

Engineering is the task of trading off various different principles;
in general it is impossible to satisfy all of them 100%. For example,
Mach tried to be highly layered, with strict memory protections to
protect one part of the kernel from another --- all good things, in a
generic sense. Streams tried to be extremely layered as well, and had
a design that a computer science professor would love. Both turned
out to be spectacular failures because their performance sucked like
you wouldn't believe.

Saying "layered programming good" is useless. Sure, we agree. But
it's not the only consideration we have to take into account.
Furthermore, the Linux kernel has a decent amount of layering already,
although it is a pragmatist's sort of layering, where we use it when
it is useful and ignore when it gets in the way. Given your
high-level descriptions, perhaps the best description of what we
currently have in Linux is that it uses a C-based (because no one has
ever bothered to create a C++ obfuscated contest --- it's too easy),
multiple-inheritance model, where each device driver can use
common-denominator code as necessary (from the PCI sublayer, the tty
layer, the SCSI layer, the network layer, etc.), but it is always
possible for each driver to provide specific overrides as necessary
for the hardware.
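
To make that concrete with a sketch - the names below are
hypothetical, not the actual kernel declarations - the pattern is an
ops table where a NULL entry means "use the shared
common-denominator code" and a non-NULL entry is the driver's
specific override:

  struct blk_ops {
          int (*request)(void *req);  /* NULL => use generic_request() */
          int (*flush)(void);         /* NULL => use generic_flush()   */
  };

  int generic_request(void *req);     /* shared layer code */
  int generic_flush(void);

  static int do_request(const struct blk_ops *ops, void *req)
  {
          if (ops->request)
                  return ops->request(req);   /* driver-specific override */
          return generic_request(req);        /* common-denominator code */
  }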

Is my description of Linux's device model accurate? Sure. Is it
useful in terms of telling us what we ought to do in order to improve
our architecture? Not really. It's just a buzzword-compliant,
high-level description which is great for getting great grades from
clueless C.S. professors that are more in love with theory than
practice. But that's about all it's good for.

In order for it to be at all useful, it has to go to the next level.
So if you would like to tell us how we can do a better job, please
submit a proposal. But it will have to be a detailed one, not one
that is filled with high-level buzzwords and hand-waving.

- Ted

2004-03-08 18:41:55

by Mike Fedyk

Subject: Re: A Layered Kernel: Proposal

Hi Grigor.

Theodore Ts'o pretty much summed it up, but let me add a couple things...

Grigor Gatchev wrote:
>
> On Sun, 7 Mar 2004, Mike Fedyk wrote:
>
>
>>Grigor Gatchev wrote:
>>
>>>Here it is. Get your axe ready! :-)
>>>
>>>---
>>>
>>>Driver Model Types: A Short Description.
>>>
>>>(Note: This is NOT a complete description of a layer,
>>>according to the kernel layered model I dared to offer. It
>>>concerns only the hardware drivers in a kernel.)
>>>
>>
>>Looks like you're going to need to get a little deeper to keep it from
>>being OT on this list.
>>
>>What is the driver designs of say, solaris, OS/2, Win32 (NT & 9x trees)
>>and how are they good and how are they bad?
>>
>>What specific (think API changes, nothing generalized here *at all*)
>>changes could benefit linux, and why and how? Nobody will listen to
>>hand waving, so you need a tight case for each change.
>>
>>HTH,
>>
>>Mike
>
>
> Dear Mike,
>
> An year ago, I was teaching a course on UNIX security. After the
> first hour, a student - military man with experience of commanding PC-DOS
> users - interrupted me: "What is all that mumbo-jumbo about? Users,
> groups, permissions - all this is empty words, noise! Don't you at least
> classify your terminal, and issue orders on who uses it? Man, either talk
> some real stuff, or I am not wasting anymore my time on you!"
>
> Of course, I was happy to let him "stop wasting his time on me".
>

Yes, I can understand. Dealing with users as a sysadmin isn't much
different.

> Reading some of the posts here, I get this deja vu. I know the driver
> designs of some OS, and don't know others. I may waste a month or two of
> work, and post a huge description of all big OS driver models that I
> know, or waste an year of work, and give you a description of all big OS
> driver models. Will this give you anything more than what was already
> posted? Wouldn't you read my hundreds of pages, then try to summarise all,
> and eventually come to the same?

The descriptions were for my benefit, as whenever you were asked for
specifics, you were a little more specific in *design* terms, not
code terms.

> Or you will try to pick this from one model, that from another, and end up
> assembling a creature with eagle wings, dinosaur teeth, anthelope legs and
> shark fins, and wondering why it can neither fly nor run nor swim really
> well, why it has bad performance? This can't be, I must have misunderstood
> you.
>
> Also, does "think API changes, nothing generalised *at all*" mean anything
> different from "think code, no design *at all*"? If this is some practical
> joke, it is not funny. (I can't believe that a kernel programmer will not
> know why design is needed, where is its place in the production of a
> software, and how it should and how it should not be done.)
>

No. The designs you are talking about are *far* too generalized.
Meaning, every proposal on this list that gets anywhere is for code
modifications. You need to have your *design* proposal in *code* speak.
Anything else will just be called hand waving.

> OK. Let's try explain it once more:
>
> While coding, think coding. While designing, think designing. Design comes
> before coding; otherwise you design while coding, and produce a mess.
> Enough of such an experience, and you start believing that design without
> coding is empty words, noise. Hand waving.
>
> What I gave is more than enough to start designing a good driver model.
> After the design is OKed, details of implementation, eg. API changes, may
> be developed. Developing them now, however, is the wrong time, for the
> reasons explained just above. Let's not put the cart ahead of the horse.
>

I'm not a kernel programmer.

> Or I am wrong?

Maybe. If you can, have your design describe the current Linux model
as "in order to write to this SCSI disk I need to call foo(),..." and
describe your new change as "if foo() called bar() I could do...".

Basically, you need to describe everything in programming terms. A
great example is the BIO design document from Jens Axboe. That's an
example of a design-before-code project (but I only happened to hear
about it when 2.5.1 came out, and the code was ready to be merged...).

Mike

2004-03-08 20:43:15

by Al Viro

Subject: Re: A Layered Kernel: Proposal

On Mon, Mar 08, 2004 at 12:39:40PM -0500, Theodore Ts'o wrote:

> So give us a design. Make a concrete proposal. Tell us what the
> API's --- with C function prototypes --- should look like, and how we
> should migrate what we have to your new beautiful nirvana.

But... but... that requires *work*! How can you demand that?!?

To the original toss^Wposter: it's an old observation that in order
to be useful a hypothesis has to be falsifiable. A similar principle
applies to design proposals - to be worth any attention they have to
be detailed enough to allow meaningful criticism.

What you have done so far is equivalent to coming to a hospital and
saying "aseptic good, infection bad". That would get pretty much
the same reactions, varying from "yes, we know" to "do you have any
specific suggestions?" and "stop wasting our time"[1].

In short: get lost and do not come back until you have something less
vague.

[1] If you are insistent enough, you might also earn a free referral
to a psychiatrist. You would have to work harder than you have done
so far, though...

2004-03-08 21:35:33

by Paul Jackson

Subject: Re: A Layered Kernel: Proposal

> While coding, think coding. While designing, think designing. Design comes
> before coding; otherwise you design while coding, and produce a mess.

You are describing, roughly, the waterfall model of software development.

Linux kernel work is closer to something resembling the prototype
and/or spiral model.

See further explanations of these terms, for instance, at:

http://model.mercuryinteractive.com/references/models/

But, in any case, Linux kernel work _does_ have a rather extensively
articulated development model which we find is working rather well,
thank-you. For all I know, this methodology was defined by some
traumatic event at the birth of Linus - whatever - seems to work.

When in Rome, do as the Romans. And especially don't be surprised
at being pushed aside if you protest that we aren't behaving as the
French.

--
I won't rest till it's the best ...
Programmer, Linux Scalability
Paul Jackson <[email protected]> 1.650.933.1373

2004-03-09 19:22:04

by Grigor Gatchev

Subject: Re: A Layered Kernel: Proposal



On Mon, 8 Mar 2004, Theodore Ts'o wrote:

> On Mon, Mar 08, 2004 at 02:23:43PM +0200, Grigor Gatchev wrote:
> >
> > Also, does "think API changes, nothing generalised *at all*" mean anything
> > different from "think code, no design *at all*"? If this is some practical
> > joke, it is not funny. (I can't believe that a kernel programmer will not
> > know why design is needed, where is its place in the production of a
> > software, and how it should and how it should not be done.)
>
> So give us a design. Make a concrete proposal. Tell us what the
> API's --- with C function prototypes --- should look like, and how we
> should migrate what we have to your new beautiful nirvana.

Aside from the beautiful nirvana irony, that is what I am trying not to
do. For the following reason:

When you have to produce a large piece of software, you first design
it at a global level. Then you design the subsystems. After that,
their subsystems. And so on, level after level. Finally, you design
the code itself. Only then do you actually write the code.

This is opposed to the model where you start with the code. You write
a piece, then you see it's not quite matching the whole. You design
the group, then rewrite your code, and a dozen other pieces, to match
the design. That's nice, but at the upper level it's still not quite
matching. And so on, up to the top. (Actually, a coder with some
sense of design would start as far up the chain as he can - but with
this design model that is never very high. And the higher the level,
the fewer get that far.)

The Linux kernel is, as a result, the best example I have ever seen
of the second approach. It has good code implementation, often
excellent. At the lowest level, its design is good. But the higher
you go, the murkier the design becomes. At the topmost levels, there
is only as much design as is absolutely needed to keep the whole from
falling apart.

There is a reason for this - the origin and growth of Linux. This is
the way most fan-initiated and fan-supported systems go. And it
cannot really be changed completely - nor does it need to be.
However, a little more design at the more global levels would be of
good use, I think.

The problem is that the current culture of kernel writing favors
"proof by code". This is also understandable and natural, given the
way Linux has developed. If you can't supply real code that beats the
current code in performance, the only thing you will get plenty of is
doubters.

However, the more global a concept, the larger the amount of coding
needed to prove it in code. For example, designing a driver group
would need only a driver or two written, or even ported, to show its
actual merits. A complete kernel driver model would need at least a
dozen drivers (re)written, and a lot of other code changes made. A
complete layered model of the kernel, which is what I dared to offer
initially, would require rewriting at least 10% of the kernel code -
that is over 200,000 lines, AFAIK - to supply a working system that
proves anything. This is very clearly beyond the abilities of a
single programmer (or, at least, well beyond mine).

Well, kernel coders are not idiots. Don't write all the code, you
say, just show me the function prototypes - that's enough. For
designing a group of drivers, that's not so hard. For an entire
driver model, it may again be well beyond a single programmer's
abilities. (BTW, what if this programmer doesn't know C? Is he by
definition a bad software designer?)

In short: that is why people sometimes talk design without talking
code. Human emotions are essentially brain chemistry, which is
essentially nuclear physics - but no sane person would try to
evaluate emotions by asking for examples at the nuclear physics
level. And any mathematician who tries to explain Cantorian numbers
in terms of apples and pears would be ridiculed (and would have to be
a genius to succeed at all).

I understand well that, given the kernel's history, this may look
like hand waving to you. I have seen a construction worker who
sincerely believed that architects are empty talkers who have never
laid brick upon brick, and that dumping all of them would be the best
thing to do. However, if you remain convinced of that, the kernel
will stay underdesigned at the more global levels. I may be a
brainless twit, a troll, a useless C.S. professor, anything of the
sort - but the problem is real. I may go, but unless it is properly
addressed, the problem will stay.

> Engineering is the task of trading off various different principals;
> in general it is impossible to satisfy all of them 100%. For example,
> Mach tried to be highly layered, with strict memory protections to
> protect one part of the kernel from another --- all good things, in a
> generic sense. Streams tried to be extremely layered as well, and had
> a design that a computer science professor would love. Both turned
> out to be spectacular failures because their performance sucked like
> you wouldn't believe.

I would believe it very well - I have tested some code from the
STREAMS project on my own PC. :-) It really was overlayered, and I
vaguely remember that it did almost the same (resource-expensive)
check in _every_ possible layer for one request I tracked, maybe
nearly a dozen times. In a sense, STREAMS was exactly the opposite of
Linux - well designed at the global level, but the more concrete the
level, the worse the design. No wonder its performance sucked...
However, I do believe that Linux could also have a design that a
computer science professor would love, without paying for it in
performance.

> Saying "layered programming good" is useless. Sure, we agree.

Really? I see a lot of disagreement on this around here.

> But it's not the only consideration we have to take into account.
> Furthermore, the Linux kernel has a decent amount of layering already,
> although it is a pragmatist's sort of layering, where we use it when
> it is useful and ignore when it gets in the way. Given your
> high-level descriptions, perhaps the best description of what we
> currently have in Linux is that it uses a C-based (because no one has
> ever bothered to create a C++ obfuscated contest --- it's too easy),
> multiple-inheritance model, where each device driver can use
> common-denominator code as necessary (from the PCI sublayer, the tty
> layer, the SCSI layer, the network layer, etc.), but it is always
> possible for each driver to provide specific overrides as necessary
> for the hardware.
>
> Is my description of Linux's device model accurate? Sure. Is it
> useful in terms of telling us what we ought to do in order to improve
> our architecture? Not really. It's just a buzzword-compliant,
> high-level description which is great for getting great grades from
> clueless C.S. professors that are more in love with theory than
> practice. But that's about all it's good for.

You mean that, for example, drawing the elements of a house on paper
and tossing some numbers back and forth is useless for deciding how
to build the house, and that only getting your hands on the real
bricks tells you whether this wall should be here, and whether that
column is thick enough to survive a quake?

Thanks, Ted. I think I am getting it, finally.

Unhappily, what I am getting is the clear idea that you are a team of
geniuses, or that I am a hopeless idiot, or maybe both. Either way, I
am nowhere near you, and am only wasting your time now.

Sorry.

> In order for it to be at all useful, it has to go to the next level.
> So if you would like to tell us how we can do a better job, please
> submit a proposal. But it will have to be a detailed one, not one
> that is filled with high-level buzzwords and hand-waving.
>
> - Ted


2004-03-09 20:49:55

by Timothy Miller

Subject: Re: A Layered Kernel: Proposal



Grigor Gatchev wrote:
>
> On Mon, 8 Mar 2004, Theodore Ts'o wrote:
>
>
>>On Mon, Mar 08, 2004 at 02:23:43PM +0200, Grigor Gatchev wrote:
>>
>>>Also, does "think API changes, nothing generalised *at all*" mean anything
>>>different from "think code, no design *at all*"? If this is some practical
>>>joke, it is not funny. (I can't believe that a kernel programmer will not
>>>know why design is needed, where is its place in the production of a
>>>software, and how it should and how it should not be done.)
>>
>>So give us a design. Make a concrete proposal. Tell us what the
>>API's --- with C function prototypes --- should look like, and how we
>>should migrate what we have to your new beautiful nirvana.
>
>
> Aside from the beautiful nirvana irony, that is what I am trying not to
> do. For the following reason:
>
> When you have to produce a large software, you first design it at a global
> level. Then you design the subsystems. After that, their subsystems. And
> so on, level after level. At last, you design the code itself. Only then
> you write the code actually.
>
[snip]

As one of the people who has been told "show me the code" before, let me
try to help you understand what the kernel developers are asking of you.

First of all, they are NOT asking you to do the bottom-up approach that
you seem to think they're asking for. They're not asking you to show
them code which was not the result of careful design. No. Indeed, they
all agree with you that careful planning is always a good idea, in fact
critical.

Rather, what they are asking you to do is to create the complete
top-down design _yourself_ and then show it to them. If you do a
design that is well-thought-out and complete, then code (i.e.
function prototypes) will naturally and easily fall out of it.
Present your design and the resultant code for evaluation.

Only then can kernel developers give you meaningful feedback. You'll
notice that the major arguments aren't about your design but rather
about there being a lack of anything to critique. If you want feedback,
you must produce something which CAN be critiqued.

Follow the scientific method:
1) Construct a hypothesis (the document you have already written plus
more detail).
2) Develop a means to test your hypothesis (write the code that your
design implies).
3) Test your hypothesis (present your code and design for criticism).
4) If your hypothesis is proven wrong (someone has a valid criticism),
adjust the hypothesis and then goto step (2).

Perhaps you have not done this because you feel that your "high level"
design (which you have presented) is not complete. The problem is that,
based on what you have presented, no one can help you complete it.
Therefore, the thing to do is to complete it yourself, right or wrong.
Only when you have actually done something which is wrong can you
actually go about doing things correctly. Actually wrong is better than
hypothetically correct.

Then, you may be thinking that this will result in more work, because
you'll create a design and write some code just to find out that it
needs to be rewritten. But this would be poor reasoning. It would be
extremely unrealistic to think that you could create a design a
priori that was good and correct, before you've ever done anything to
test its implications.

Most likely, you would go through several iterations of your spec and
the implied code before it's acceptable to anyone, regardless of how
good it is to begin with. Just think about how many iterations Con
and Nick have gone through for their interactivity schedulers;
they've had countless good ideas, but only experimentation and user
criticism could tell them what really worked and what didn't. And
these are just the schedulers -- you're talking about the
architecture of the whole kernel!


2004-03-09 23:24:32

by Mike Fedyk

Subject: Re: A Layered Kernel: Proposal

Grigor Gatchev wrote:
>
> On Mon, 8 Mar 2004, Theodore Ts'o wrote:
>>Is my description of Linux's device model accurate? Sure. Is it
>>useful in terms of telling us what we ought to do in order to improve
>>our architecture? Not really. It's just a buzzword-compliant,
>>high-level description which is great for getting great grades from
>>clueless C.S. professors that are more in love with theory than
>>practice. But that's about all it's good for.
>
>
> You mean that, for example, drawing house elements on paper, and tossing
> some numbers here and back is useless in deciding how to build a house,
> and only getting hands to the real bricks gives you whether this wall
> should be here, and whether that column is thick enough to survive through
> a quake?

You are reading it as if they want you to build the whole house
yourself. They're asking for a schematic (function prototypes), not
for you to build the house (not to code it all yourself).

Oh yeah and what Timothy Miller said! :-D

Mike