2005-09-01 04:00:53

by Keith Packard

Subject: Re: State of Linux graphics

On Wed, 2005-08-31 at 18:58 -0700, Allen Akin wrote:
> On Wed, Aug 31, 2005 at 02:06:54PM -0700, Keith Packard wrote:
> | On Wed, 2005-08-31 at 13:06 -0700, Allen Akin wrote:
> | > ...
> |
> | Right, the goal is to have only one driver for the hardware, whether an
> | X server for simple 2D only environments or a GL driver for 2D/3D
> | environments. ...
>
> I count two drivers there; I was hoping the goal was for one. :-)

Yeah, two systems, but (I hope) only one used for each card. So far, I'm
not sure of the value of attempting to provide a mostly-software GL
implementation in place of existing X drivers.

> | ... I think the only questions here are about the road from
> | where we are to that final goal.
>
> Well there are other questions, including whether it's correct to
> partition the world into "2D only" and "2D/3D" environments. There are
> many disadvantages and few advantages (that I can see) for doing so.

I continue to work on devices for which 3D isn't going to happen. My
most recent window system runs on a machine with only 384K of memory,
and yet supports a reasonable facsimile of a Linux desktop environment.

In the 'real world', we have Linux machines continuing to move
"down-market" with a target price of $100. At this price point, it is
reasonable to look at what are now considered 'embedded' graphics
controllers with no acceleration other than simple copies and fills.

Again, the question is whether a mostly-software OpenGL implementation
can effectively compete against the simple X+Render graphics model for
basic 2D application operations, and whether there are people interested
in even trying to make this happen.

> | ... However, at the
> | application level, GL is not a very friendly 2D application-level API.
>
> The point of OpenGL is to expose what the vast majority of current
> display hardware does well, and not a lot more. So if a class of apps
> isn't "happy" with the functionality that OpenGL provides, it won't be
> happy with the functionality that any other low-level API provides. The
> problem lies with the hardware.

Not currently; the OpenGL we have today doesn't provide for
component-level compositing or off-screen drawable objects. The former
is possible in much modern hardware, and may be exposed in GL through
pixel shaders, while the latter spent far too long mired in the ARB and
is only now on the radar for implementation in our environment.

Off-screen drawing is the dominant application paradigm in the 2D world,
so we can't function without it, while component-level compositing
provides superior text presentation on LCD screens, an obviously
growing segment of the market.

> Conversely, if the apps aren't taking advantage of the functionality
> OpenGL provides, they're not exploiting the opportunities the hardware
> offers. Of course I'm not saying all apps *must* use all of OpenGL;
> simply that their developers should be aware of exactly what they're
> leaving on the table. It can make the difference between an app that's
> run-of-the-mill and one that's outstanding.

Most 2D applications aren't all about the presentation on the screen;
right now, we're struggling to just get basic office functionality
provided to the user. The cairo effort is more about making applications
portable to different window systems and printing systems than it is
about bling, although the bling does have a strong pull for some
developers.

So, my motivation for moving to GL drivers is far more about providing
drivers for closed source hardware and reducing developer effort needed
to support new hardware than it is about making the desktop graphics
faster or more fancy.

> "Friendliness" is another matter, and it makes a ton of sense to package
> common functionality in an easier-to-use higher-level library that a lot
> of apps can share. In this discussion my concern isn't with Cairo, but
> with the number and type of back-end APIs we (driver developers and
> library developers and application developers) have to support.

Right, again the goal is to have only one driver per video card. Right
now we're not there, and the result is that the GL drivers take a back
seat in most environments to the icky X drivers that are required to
provide simple 2D graphics. That's not a happy place to be, and we do
want to solve that as soon as possible.

> | ... GL provides
> | far more functionality than we need for 2D applications being designed
> | and implemented today...
>
> With the exception of lighting, it seems to me that pretty much all of
> that applies to today's "2D" apps. It's just a myth that there's "far
> more" functionality in OpenGL than 2D apps can use. (Especially for
> OpenGL ES, which eliminates legacy cruft from full OpenGL.)

The bulk of 2D applications need to paint solid rectangles, display a
couple of images with a bit of scaling, and draw some text. All the
rest of the 3D pipeline is just standing around watching.

> | ... picking the right subset and sticking to that is
> | our current challenge.
>
> That would be fine with me. I'm more worried about what Render (plus
> EXA?) represents -- a second development path with the actual costs and
> opportunity costs I've mentioned before, and if apps become wedded to it
> (rather than to a higher level like Cairo), a loss of opportunity to
> exploit new features and better performance at the application level.

I'm not concerned about that; glitz already provides an efficient
mapping directly from the Render semantics to OpenGL. And, Render is
harder to use than OpenGL in most ways, encouraging applications to use
an abstraction layer like cairo which can also easily support GL
directly.
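
As a sketch of what that mapping looks like (illustrative only, not
glitz source), Render's workhorse OVER operator on premultiplied
pixels comes down to a single fixed-function blend setting:

glEnable(GL_BLEND);
/* Render PictOpOver on premultiplied pixels:
 * dst = src + (1 - src.alpha) * dst */
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
/* ...then draw the source as a screen-aligned textured quad */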

> | ...The integration of 2D and 3D acceleration into a
> | single GL-based system will take longer, largely as we wait for the GL
> | drivers to catch up to the requirements of the Xgl implementation that
> | we already have.
>
> Like Jon, I'm concerned that the focus on Render and EXA will
> simultaneously take resources away from and reduce the motivation for
> those drivers.

Neither of us gets to tell people what code they write, and right now a
developer can either spend a week or so switching an XFree86 driver to
EXA and have decent Render-based performance or they can stand around
and wait for 'someone else' to fix the Mesa/DRI drivers so that we can
then port Xgl to them. Given the choice between 'good performance in
one week' and 'stand around and hope GL gets fast enough', it's not
hard to see why people are interested in EXA drivers.

Plus, it's not all bad -- we're drawing in new developers who are
learning about how graphics chips work at a reasonably low level and who
may become interested enough to go help with the GL drivers. And, I'm
seeing these developers face up to some long-standing DRI issues
surrounding memory management. EXA on DRM (the only reasonable EXA
architecture in my mind) has all of the same memory management issues
that DRI should be facing to provide FBO support. Having more eyes and
brains looking at this problem can't hurt.

> | I'm not sure we have any significant new extensions to create here;
> | we've got a pretty good handle on how X maps to GL and it seems to work
> | well enough with suitable existing extensions.
>
> I'm glad to hear it, though a bit surprised.

As I said, glitz shows that given the existing standards, some of which
are implemented only in closed-source drivers, we can provide Render
semantics in an accelerated fashion.

> | This will be an interesting area of research; right now, 2D applications
> | are fairly sketchy about the structure of their UIs, so attempting to
> | wrap them into more structured models will take some effort.
>
> Game developers have done a surprising amount of work in this area, and
> I know of one company deploying this sort of UI on graphics-accelerated
> cell phones. So some practical experience exists, and we should find a
> way to tap into it.

Right, it will be interesting to see how this maps into existing 2D
toolkits. I suspect that the effects will be reflected up to
applications, which won't necessarily be entirely positive.

> | Certainly ensuring that cairo on glitz can be used to paint into an
> | arbitrary GL context will go some ways in this direction.
>
> Yep, that's essential.
>
> | ...So far, 3D driver work has proceeded almost entirely on the
> | newest documented hardware that people could get. Going back and
> | spending months optimizing software 3D rendering code so that it works
> | as fast as software 2D code seems like a thankless task.
>
> Jon's right about this: If you can accelerate a given simple function
> (blending, say) for a 2D driver, you can accelerate that same function
> in a Mesa driver for a comparable amount of effort, and deliver a
> similar benefit to apps. (More apps, in fact, since it helps
> OpenGL-based apps as well as Cairo-based apps.)

Yes, you *can*, but the amount of code needed to perform simple
pixel-aligned upright blends is a tiny fraction of that needed to deal
with filtering textures and *then* blending. All of the compositing code
needed for the Render extension, including the accelerated (MMX) paths,
is implemented in about 10K lines of code. Optimizing a new case
generally involves writing about 50 lines of code.

> So long as people are encouraged by word and deed to spend their time on
> "2D" drivers, Mesa drivers will be further starved for resources and the
> belief that OpenGL has nothing to offer "2D" apps will become
> self-fulfilling.

I'm certainly not encouraging them; the EXA effort was started by
engineers at Trolltech and then spread to the x.org minions. Again, the
only encouragement they really need is the simple fact that a few days'
work yields a tremendous improvement in functionality. You can't get
that with a similar effort on the GL front at this point.

> | So, I believe applications will target the Render API for the
> | foreseeable future. ...
>
> See above. :-)

I'll consider Xgl a success if it manages to eliminate 2D drivers from
machines capable of supporting OpenGL. Even if the bulk of applications
continue to draw using Render and that is translated by X to OpenGL, we
will at least have eliminated a huge duplication of effort between 2D
and 3D driver development, provided far better acceleration than we have
today for Render operations and made it possible for the nasty
closed-source vendors to ship working drivers for their latest video
cards.

I see the wider availability of OpenGL APIs to be a nice side-effect at
this point; applications won't (yet) be able to count on having decent
OpenGL performance on every desktop, but as the number of desktops with
OpenGL support grows, we will see more and more applications demanding
it, and people making purchasing decisions based on OpenGL
availability for such applications.

Resources for Mesa development are even more constrained than X
development, but both of these show signs of improvement, both for
social and political reasons within the community as well as economic
reasons within corporations. Perhaps someday we really will have enough
resources that watching a large number of people get side-tracked with
the latest shiny objects won't bother us quite so much.

-keith


2005-09-01 15:26:04

by Brian Paul

Subject: Re: State of Linux graphics

Just a few comments...

Keith Packard wrote:

> Again, the question is whether a mostly-software OpenGL implementation
> can effectively compete against the simple X+Render graphics model for
> basic 2D application operations, and whether there are people interested
> in even trying to make this happen.

I don't know of anyone who's written a "2D-centric" Mesa driver, but
it's feasible. The basic idea would be to simply fast-path the handful
of OpenGL operations that correspond to the basic X operations:

1. Solid rect fill: glScissor + glClear
2. Blit/copy: glCopyPixels
3. Monochrome glyphs: glBitmap
4. PutImage: glDrawPixels

Those OpenGL commands could be directly implemented with whatever
mechanism is used in conventional X drivers. I don't think the
overhead of going through the OpenGL/Mesa API would be significant.
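
To make this concrete, here is a sketch of what such a fast-path layer
might look like. All function names here are hypothetical, and it
assumes an orthographic projection so GL raster coordinates land in
window space:

void fill_rect(int x, int y, int w, int h, float r, float g, float b)
{
   glEnable(GL_SCISSOR_TEST);
   glScissor(x, y, w, h);           /* clip to the rectangle */
   glClearColor(r, g, b, 1.0f);
   glClear(GL_COLOR_BUFFER_BIT);    /* solid fill via clear */
   glDisable(GL_SCISSOR_TEST);
}

void copy_area(int sx, int sy, int w, int h, int dx, int dy)
{
   glRasterPos2i(dx, dy);           /* destination corner */
   glCopyPixels(sx, sy, w, h, GL_COLOR);
}

void draw_glyph(int x, int y, int w, int h, const GLubyte *bits)
{
   glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
   glRasterPos2i(x, y);             /* glyph color comes from glColor */
   glBitmap(w, h, 0.0f, 0.0f, 0.0f, 0.0f, bits);
}

void put_image(int x, int y, int w, int h, const GLubyte *pixels)
{
   glRasterPos2i(x, y);
   glDrawPixels(w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
}

A driver would hook exactly these entry points with the same blitter
code a conventional X driver uses today.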

If Xgl used those commands and you didn't turn on fancy blending, etc.,
the performance should be fine. If the hardware supported blending,
that could be easily exposed too. The rest of OpenGL would go through
the usual software paths (slow, but better than nothing).

It might be an interesting project for someone. After one driver was
done, subsequent ones should be fairly easy.


>>| ... However, at the
>>| application level, GL is not a very friendly 2D application-level API.
>>
>>The point of OpenGL is to expose what the vast majority of current
>>display hardware does well, and not a lot more. So if a class of apps
>>isn't "happy" with the functionality that OpenGL provides, it won't be
>>happy with the functionality that any other low-level API provides. The
>>problem lies with the hardware.
>
>
> Not currently; the OpenGL we have today doesn't provide for
> component-level compositing or off-screen drawable objects. The former
> is possible in much modern hardware, and may be exposed in GL through
> pixel shaders, while the latter spent far too long mired in the ARB and
> is only now on the radar for implementation in our environment.
>
> Off-screen drawing is the dominant application paradigm in the 2D world,
> so we can't function without it, while component-level compositing
> provides superior text presentation on LCD screens, an obviously
> growing segment of the market.

Yeah, we really need to make some progress with off-screen rendering
in our drivers (either Pbuffers or renderbuffers). I've been working
on renderbuffers but we really need that overdue memory manager.


>>Jon's right about this: If you can accelerate a given simple function
>>(blending, say) for a 2D driver, you can accelerate that same function
>>in a Mesa driver for a comparable amount of effort, and deliver a
>>similar benefit to apps. (More apps, in fact, since it helps
>>OpenGL-based apps as well as Cairo-based apps.)
>
> Yes, you *can*, but the amount of code needed to perform simple
> pixel-aligned upright blends is a tiny fraction of that needed to deal
> with filtering textures and *then* blending. All of the compositing code
> needed for the Render extension, including the accelerated (MMX) paths,
> is implemented in about 10K lines of code. Optimizing a new case
> generally involves writing about 50 lines of code.

If the blending is for screen-aligned rects, glDrawPixels would be a
far easier path to optimize than texturing. The number of state
combinations related to texturing is pretty overwhelming.
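
The whole "state vector" for that path is essentially one blend
function. A sketch only (assuming x, y, w, h and pixels are in scope):

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); /* non-premultiplied over */
glRasterPos2i(x, y);
glDrawPixels(w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glDisable(GL_BLEND);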


Anyway, I think we're all in agreement about the desirability of
having a single, unified driver in the future.

-Brian

2005-09-01 15:45:52

by Alan

Subject: Re: State of Linux graphics

On Iau, 2005-09-01 at 09:24 -0600, Brian Paul wrote:
> If the blending is for screen-aligned rects, glDrawPixels would be a
> far easier path to optimize than texturing. The number of state
> combinations related to texturing is pretty overwhelming.

As Doom showed, however, once you can cut down some of the combinations,
particularly if you know the texture orientation is limited, you can
really speed things up.

Blending is going to end up applying textures to flat surfaces facing
the viewer, not rotated or skewed.

2005-09-01 15:59:33

by Jim Gettys

Subject: Re: State of Linux graphics

On Thu, 2005-09-01 at 09:24 -0600, Brian Paul wrote:

>
> If the blending is for screen-aligned rects, glDrawPixels would be a
> far easier path to optimize than texturing. The number of state
> combinations related to texturing is pretty overwhelming.
>
>
> Anyway, I think we're all in agreement about the desirability of
> having a single, unified driver in the future.
>

Certainly for most hardware in the developed world I think we all agree
with this. The argument is about when we get to one driver model, not if
we get there, and not what the end state is.

In my view, the battle is on legacy systems and the very low end, not in
hardware we hackers use that might run Windows Vista or Mac OS X....

I've had the (friendly) argument with Allen Akin for 15 years that due
to reduction of hardware costs we can't presume OpenGL. Someday, he'll
be right, and I'll be wrong. I'm betting I'll be right for a few more
years, and nothing would tickle me more than to lose the argument
soon...

Legacy hardware and that being proposed/built for the developing world
is tougher; we have code in hand for existing chips, and the price point
is even well below cell phones on those devices. They don't have
anything beyond basic blit and, miracle of miracles, alpha blending.
These are built on one or two generation back fabs, again for cost.
And as there are no carriers subsidizing the hardware cost, the real
hardware cost has to be met, at very low price points. They don't come
with the features Allen admires in the latest cell phone chips.

The onus of proof that we can immediately ditch a second driver
framework in favor of everything being OpenGL, even a software-tuned
one, is in my view on those who claim that is viable. Waving one's
hands and claiming there are 100-kbyte closed-source OpenGL/ES
implementations doesn't cut it, given where we are today with the code
we already have in hand. So far, the case hasn't been made.

Existence proof that we're wrong and can move *entirely* to OpenGL
sooner rather than later would be gratefully accepted.

Regards,
Jim


2005-09-01 16:05:47

by Brian Paul

Subject: Re: State of Linux graphics

Alan Cox wrote:
> On Iau, 2005-09-01 at 09:24 -0600, Brian Paul wrote:
>
>>If the blending is for screen-aligned rects, glDrawPixels would be a
>>far easier path to optimize than texturing. The number of state
>>combinations related to texturing is pretty overwhelming.
>
>
> As Doom showed, however, once you can cut down some of the combinations,
> particularly if you know the texture orientation is limited, you can
> really speed things up.
>
> Blending is going to end up applying textures to flat surfaces facing
> the viewer, not rotated or skewed.

Hi Alan,

It's other (non-orientation) texture state I had in mind:

- the texel format (OpenGL has over 30 possible texture formats).
- texture size and borders
- the filtering mode (linear, nearest, etc)
- coordinate wrap mode (clamp, repeat, etc)
- env/combine mode
- multi-texture state

It basically means that the driver may have to do state checks similar
to this to determine if it can use optimized code. An excerpt from Mesa:

if (ctx->Texture._EnabledCoordUnits == 0x1
    && !ctx->FragmentProgram._Active
    && ctx->Texture.Unit[0]._ReallyEnabled == TEXTURE_2D_BIT
    && texObj2D->WrapS == GL_REPEAT
    && texObj2D->WrapT == GL_REPEAT
    && texObj2D->_IsPowerOfTwo
    && texImg->Border == 0
    && texImg->Width == texImg->RowStride
    && (format == MESA_FORMAT_RGB || format == MESA_FORMAT_RGBA)
    && minFilter == magFilter
    && ctx->Light.Model.ColorControl == GL_SINGLE_COLOR
    && ctx->Texture.Unit[0].EnvMode != GL_COMBINE_EXT) {
   if (ctx->Hint.PerspectiveCorrection == GL_FASTEST) {
      if (minFilter == GL_NEAREST
          && format == MESA_FORMAT_RGB
          && (envMode == GL_REPLACE || envMode == GL_DECAL)
          && ((swrast->_RasterMask == (DEPTH_BIT | TEXTURE_BIT)
               && ctx->Depth.Func == GL_LESS
               && ctx->Depth.Mask == GL_TRUE)
              || swrast->_RasterMask == TEXTURE_BIT)
          && ctx->Polygon.StippleFlag == GL_FALSE
          && ctx->Visual.depthBits <= 16) {
         if (swrast->_RasterMask == (DEPTH_BIT | TEXTURE_BIT)) {
            USE(simple_z_textured_triangle);
         }
         else {
            USE(simple_textured_triangle);
         }
      }
      [...]

That's pretty ugly. Plus the rasterization code for textured
triangles is fairly complicated.

But the other significant problem is the application has to be sure it
has set all the GL state correctly so that the fast path is really
used. If it gets one thing wrong, you may be screwed. If different
drivers optimize slightly different paths, that's another problem.

glDrawPixels would be simpler for both the implementor and user.

-Brian

2005-09-01 16:42:12

by Andreas Hauser

Subject: Re: State of Linux graphics

jg wrote @ Thu, 01 Sep 2005 11:59:33 -0400:

> Legacy hardware and that being proposed/built for the developing world
> is tougher; we have code in hand for existing chips, and the price point
> is even well below cell phones on those devices. They don't have
> anything beyond basic blit and, miracle of miracles, alpha blending.
> These are built on one or two generation back fabs, again for cost.
> And as there are no carriers subsidizing the hardware cost, the real
> hardware cost has to be met, at very low price points. They don't come
> with the features Allen admires in the latest cell phone chips.

So you suggest that those of us who have capable cards, which can be
had for under 50 Euro here, should find something better than X.org to
run on them, because X.org is concentrating on sub-10-Euro chips?
Somehow I always thought that older XFree86 trees were just fine for them.

Andy

2005-09-01 17:22:16

by Ian Romanick

Subject: Re: State of Linux graphics


Brian Paul wrote:

> It's other (non-orientation) texture state I had in mind:
>
> - the texel format (OpenGL has over 30 possible texture formats).
> - texture size and borders
> - the filtering mode (linear, nearest, etc)
> - coordinate wrap mode (clamp, repeat, etc)
> - env/combine mode
> - multi-texture state

Which is why it's such a good target for code generation. You'd
generate the texel fetch routine, use that to generate the wrapped
texel fetch routine, use that to generate the filtered texel fetch
routine, and use that to generate the env/combine routines.
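
Conceptually something like this, with ordinary function pointers
standing in for the generated stubs (purely illustrative; in real
generated code each stage would be inlined into the emitted routine):

typedef void (*fetch_fn)(const void *tex, int s, int t, float rgba[4]);

fetch_fn gen_fetch(int texel_format);           /* raw texel decode */
fetch_fn gen_wrap(fetch_fn f, int ws, int wt);  /* coordinate wrapping */
fetch_fn gen_filter(fetch_fn f, int filter);    /* nearest/linear */
fetch_fn gen_combine(fetch_fn f, int env_mode); /* env/combine */

/* Only the state combinations actually hit ever get generated: */
fetch_fn fn =
   gen_combine(gen_filter(gen_wrap(gen_fetch(fmt), ws, wt), filt), env);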

Once upon a time I had the first part and some of the second part
written. Doing just that little bit was slightly faster on a Pentium 3
and slightly slower on a Pentium 4. I suspect the problem was that I
wasn't caching the generated code intelligently enough, so it was
thrashing the CPU cache. The other problem is that, in the absence of
an assembler in Mesa, it was really painful to change the code stubs.

2005-09-01 17:27:09

by Keith Whitwell

Subject: Re: State of Linux graphics

Ian Romanick wrote:
>
> Brian Paul wrote:
>
>
>>It's other (non-orientation) texture state I had in mind:
>>
>>- the texel format (OpenGL has over 30 possible texture formats).
>>- texture size and borders
>>- the filtering mode (linear, nearest, etc)
>>- coordinate wrap mode (clamp, repeat, etc)
>>- env/combine mode
>>- multi-texture state
>
>
> Which is why it's such a good target for code generation. You'd
> generate the texel fetch routine, use that to generate the wrapped
> texel fetch routine, use that to generate the filtered texel fetch
> routine, and use that to generate the env/combine routines.
>
> Once upon a time I had the first part and some of the second part
> written. Doing just that little bit was slightly faster on a Pentium 3
> and slightly slower on a Pentium 4. I suspect the problem was that I
> wasn't caching the generated code intelligently enough, so it was
> thrashing the CPU cache. The other problem is that, in the absence of
> an assembler in Mesa, it was really painful to change the code stubs.

Note that the last part is now partially addressed at least - Mesa has
an integrated and simple runtime assembler for x86 and sse. There are
some missing pieces and rough edges, but it's working and useful as it
stands.

Keith

2005-09-01 20:03:08

by Allen Akin

Subject: Re: State of Linux graphics

On Wed, Aug 31, 2005 at 08:59:23PM -0700, Keith Packard wrote:
|
| Yeah, two systems, but (I hope) only one used for each card. So far, I'm
| not sure of the value of attempting to provide a mostly-software GL
| implementation in place of existing X drivers.

For the short term it's valuable for the apps that use OpenGL directly.
Games, of course, on platforms from cell-phone/PDA complexity up; also
things like avatar-based user interfaces. On desktop platforms, plenty
of non-game OpenGL-based apps exist in the Windows world and I'd expect
those will migrate to Linux as the Linux desktop market grows enough to
be commercially viable. R128-class hardware is fast enough to be useful
for many non-game apps.

For the long term, you have to decide how likely it is that demands for
new functionality on old platforms will arise. Let's assume for the
moment that they do. If OpenGL is available, we have the option to use
it. If OpenGL isn't available, we have to go through another iteration
of the process we're in now, and grow Render (or some new extensions)
with consequent duplication of effort and/or competition for resources.

| I continue to work on devices for which 3D isn't going to happen. My
| most recent window system runs on a machine with only 384K of memory...

I'm envious -- sounds like a great project. But such systems aren't
representative of the vast majority of hardware for which we're building
Render and EXA implementations today. (Nor are they representative of
the hardware on which most Gnome or KDE apps would run, I suspect.) I
question how much influence they should have over our core graphics
strategy.

| Again, the question is whether a mostly-software OpenGL implementation
| can effectively compete against the simple X+Render graphics model for
| basic 2D application operations...

I think it's pretty clear that it can, since the few operations we want
to accelerate already fit within the OpenGL framework.

(I just felt a bit of deja vu over this -- I heard eerily similar
arguments from Microsoft when the first versions of Direct3D were
created.)

| ...and whether there are people interested
| in even trying to make this happen.

In the commercial world people believe such a thing is valuable, and
it's already happened. (See, for example,
http://www.hybrid.fi/main/esframework/tools.php).

Why hasn't it happened in the Open Source world? Well, I'd argue it's
largely because we chose to put our limited resources behind projects
inside the X server instead.

| > The point of OpenGL is to expose what the vast majority of current
| > display hardware does well, and not a lot more. So if a class of apps
| > isn't "happy" with the functionality that OpenGL provides, it won't be
| > happy with the functionality that any other low-level API provides. The
| > problem lies with the hardware.
|
| Not currently; the OpenGL we have today doesn't provide for
| component-level compositing or off-screen drawable objects. The former
| is possible in much modern hardware, and may be exposed in GL through
| pixel shaders, while the latter spent far too long mired in the ARB and
| is only now on the radar for implementation in our environment.

Component-level compositing: Current and past hardware doesn't support
it, so even if you create a new low-level API for it you won't get
acceleration. You can, however, use a multipass algorithm (as Glitz
does) and get acceleration for it through OpenGL even on marginal old
hardware. I'd guess that the latter is much more likely to satisfy app
developers than the former (and that's the point I was trying to make
above).
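
For reference, the usual two-pass formulation looks roughly like this
(sketched from the standard technique, not lifted from glitz;
draw_mask_quad() is a hypothetical helper that draws the glyph mask as
a textured quad with the default GL_MODULATE texture environment):

/* Pass 1: dst.c *= 1 - src.a * mask.c   (per-component knock-out) */
glEnable(GL_BLEND);
glColor4f(src_a, src_a, src_a, src_a);
glBlendFunc(GL_ZERO, GL_ONE_MINUS_SRC_COLOR);
draw_mask_quad();

/* Pass 2: dst.c += src.c * mask.c       (per-component add) */
glColor4f(src_r, src_g, src_b, src_a);
glBlendFunc(GL_ONE, GL_ONE);
draw_mask_quad();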

Off-screen drawable objects: PBuffers are offscreen drawable objects
that have existed in OpenGL since 1995 (if I remember correctly).
Extensions exist to allow using them as textures, too. We simply chose
to implement an entirely new mechanism for offscreen rendering rather
than putting our resources into implementing a spec that already
existed.

| So, my motivation for moving to GL drivers is far more about providing
| drivers for closed source hardware and reducing developer effort needed
| to support new hardware ...

I agree that these are extremely important.

| ...than it is about making the desktop graphics
| faster or more fancy.

Some people do feel otherwise on that point. :-)

| The bulk of 2D applications need to paint solid rectangles, display a
| couple of images with a bit of scaling and draw some text.

Cairo does a lot more than that, so it would seem that we expect that
situation to change (for example, as SVG gains traction).

Aside: [I know you know this, but I just want to call it out for any
reader who hasn't considered it before.] You can almost never base a
design on just the most common operations; infrequent operations matter
too, if they're sufficiently expensive. For example, in a given desktop
scene glyph drawing commands might outnumber window-decoration drawing
commands by 1000 to 1, but if drawing a decoration is 1000 times as slow
as drawing a glyph, it accounts for half the redraw time for the scene.
In an important sense the two operations are equally critical even
though one occurs 1000 times as often as the other.
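(Concretely: 1000 glyphs x 1 time unit + 1 decoration x 1000 time units
= 2000 units per scene, with each operation contributing half.)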

| Neither of us gets to tell people what code they write...

True, but don't underestimate your influence!

| ... right now a
| developer can either spend a week or so switching an XFree86 driver to
| EXA and have decent Render-based performance or they can stand around
| and wait for 'someone else' to fix the Mesa/DRI drivers so that we can
| then port Xgl to them. ...

Also true, but the point I've often tried to make is that this situation
is the result of a lot of conscious decisions that we made in the past:
funding decisions (where funded work was involved), design decisions,
personal decisions. It doesn't have to be this way, and there are
advantages in not doing it this way again. Jon's paper that inspired
this exchange asks us to think about how we might do things better in
the future.

| Plus, it's not all bad -- we're drawing in new developers who are
| learning about how graphics chips work at a reasonably low level and who
| may become interested enough to go help with the GL drivers. And, I'm
| seeing these developers face up to some long-standing DRI issues
| surrounding memory management. EXA on DRM (the only reasonable EXA
| architecture in my mind) has all of the same memory management issues
| that DRI should be facing to provide FBO support. Having more eyes and
| brains looking at this problem can't hurt.

Well said.

| Yes, you *can*, but the amount of code needed to perform simple
| pixel-aligned upright blends is a tiny fraction of that needed to deal
| with filtering textures and *then* blending. ...

I think Brian covered this, but the short summary is you wouldn't use
texturing for this; you'd use glDrawPixels or glCopyPixels and optimize
the simple paths that are equivalent to those in Render.

| I'll consider Xgl a success if it manages to eliminate 2D drivers from
| machines capable of supporting OpenGL. Even if the bulk of applications
| continue to draw using Render and that is translated by X to OpenGL, we
| will at least have eliminated a huge duplication of effort between 2D
| and 3D driver development, provided far better acceleration than we have
| today for Render operations and made it possible for the nasty
| closed-source vendors to ship working drivers for their latest video
| cards.
|
| I see the wider availability of OpenGL APIs to be a nice side-effect at
| this point; applications won't (yet) be able to count on having decent
| OpenGL performance on every desktop, but as the number of desktops with
| OpenGL support grows, we will see more and more applications demanding
| it, and people making purchasing decisions based on OpenGL
| availability for such applications.
|
| Resources for Mesa development are even more constrained than X
| development, but both of these show signs of improvement, both for
| social and political reasons within the community as well as economic
| reasons within corporations. Perhaps someday we really will have enough
| resources that watching a large number of people get side-tracked with
| the latest shiny objects won't bother us quite so much.

I can support nearly all of that, so I think it's a good note on which
to close.

You and I and Jim have talked about this stuff many times, but hopefully
recapping it has helped provide background for the folks who are new to
the discussion.

Thanks!
Allen

2005-09-01 20:18:26

by Jim Gettys

Subject: Re: State of Linux graphics

Not at all.

We're pursuing two courses of action right now, that are not mutually
exclusive.

Jon Smirl's argument is that we can satisfy both needs simultaneously
with a GL-only strategy, and that pursuing two is counterproductive,
primarily on available-resource grounds.

My point is that I don't think the case has (yet) been made to put all
eggs into that one basket, and that some of the arguments presented for
that course of action don't hold together.

- Jim

On Thu, 2005-09-01 at 16:39 +0000, Andreas Hauser wrote:
> jg wrote @ Thu, 01 Sep 2005 11:59:33 -0400:
>
> > Legacy hardware and that being proposed/built for the developing world
> > is tougher; we have code in hand for existing chips, and the price point
> > is even well below cell phones on those devices. They don't have
> > anything beyond basic blit and, miracle of miracles, alpha blending.
> > These are built on one or two generation back fabs, again for cost.
> > And as there are no carriers subsidizing the hardware cost, the real
> > hardware cost has to be met, at very low price points. They don't come
> > with the features Allen admires in the latest cell phone chips.
>
> So you suggest that those of us who have capable cards, which can be
> had for under 50 Euro here, should find something better than X.org to
> run on them, because X.org is concentrating on sub-10-Euro chips?
> Somehow I always thought that older XFree86 trees were just fine for them.
>
> Andy

2005-09-01 20:38:55

by [email protected]

Subject: Re: State of Linux graphics

On 9/1/05, Jim Gettys <[email protected]> wrote:
> Not at all.
>
> We're pursuing two courses of action right now, that are not mutually
> exclusive.
>
> Jon Smirl's argument is that we can satisfy both needs simultaneously
> with a GL-only strategy, and that pursuing two is counterproductive,
> primarily on available-resource grounds.
>
> My point is that I don't think the case has (yet) been made to put all
> eggs into that one basket, and that some of the arguments presented for
> that course of action don't hold together.

We're not putting all of our eggs in one basket; you keep forgetting
that we already have a server that supports all of the currently
existing hardware. The question is where do we want to put our future
eggs.

--
Jon Smirl
[email protected]

2005-09-01 21:29:31

by Sean

Subject: Re: State of Linux graphics

On Thu, September 1, 2005 4:38 pm, Jon Smirl said:

> We're not putting all of our eggs in one basket; you keep forgetting
> that we already have a server that supports all of the currently
> existing hardware. The question is where do we want to put our future
> eggs.

Amen! All these arguments that we can't support an advanced future
design unless the new design also supports $10 third-world video cards
are a complete red herring.

Sean