This is somewhat off-topic, so we shouldn't discuss it TOO much on-list,
but I feel it's relevant to the state of affairs with Linux.
I haven't looked at what's available on opencores.org, but one of the
biggest problems we seem to have is with getting high-quality graphics
cards that are compatible with Linux in the sense that there are open
specs and there's an open-source driver. Oh, and we'd like to have
something decent.
I have personally designed a graphics engine. Actually, I would say
that I did maybe 90% of the Verilog coding on it, and about 20% of the
back-end (place, route, etc.) work. I also did 100% of the X11 software
(DDX) and 0% of the kernel driver code. I wouldn't call it a
masterpiece of engineering compared to the latest and greatest high-end
3D and CAD graphics chips, but it's a powerful workhorse used in most of
the air traffic control graphics cards and medical imaging cards that my
employer sells (10 megapixel displays are easy for us). Were you to
read the manual on it, you'd think some of it was a bit unusual (such as
the way you issue rendering commands), because it WAS my first ASIC
ever. I did meet all of our performance goals. And I've come a long
way since then. (Unfortunately, this may sound like a plug, but I have
competing desires: to be humble about what I did, but also not to
publicly say something that might understate the value of my
employer's products. I also feel a sense of pride in my accomplishment.)
That being said, I would LOVE to be involved in the design of an
open-source graphics chip with the Linux market primarily in mind. This
is a major sore point for us, and I, for one, would love to be involved
in solving it. With an open architecture, everyone wins. We win
because we have something stable which we can put in main-line Linux,
and chip fabs win, because anyone can sell it, and anyone can write
drivers for any platform.
Imagine ATI and nVidia competing on how they can IMPROVE the design over
one another but being obligated to release the source code. I know...
wishful thinking. But I know a variety of ways that chips and boards
could be made with respectable geometries (90nm) and high performance.
No more being at the mercy of closed-development graphics chip designers
who make Linux an afterthought, if they even think of us at all.
Please forgive my off-topic intrusion.
> No more being at the mercy of closed-development graphics chip designers
> who make Linux an after-though if they even think of us at all.
I don't know if we are at the mercy of closed-development code. In a way you
are always at the mercy of someone. Say there was an open driver development
push for an open GPU, someone would still have to code for it. Someone would
make decisions and someone would disagree.
We would in effect be at the mercy of those people. We are at the mercy of
the kernel coders as we speak. Decisions could be made that affect you and
me, right now. Suddenly it might be decided that AC97 is useless and support
will not be continued. Unlikely, but it could happen. So then what? I don't
think I could just jump in and code that. Certainly there are people who
could, but what if those patches were to be ignored? Forked?
Oh ... don't get me wrong, I think the conceptual idea is awesome.
Personally, I wouldn't know where to begin, but can the open source community
compete with Nvidia and ATI? After all, this goes beyond software; it delves
into hardware. Sure, there are people with the knowledge, maybe even with the
means, but I doubt the financial backing would be there from the get-go.
But hey, I hope I'm wrong and open hardware is the next big thing. One request
though: make the cooler quiet please :)
... One afterthought on the mercy bit. I had issues with NVidia's 5328 drivers
on 2.6 ... it was frustrating and all, but if I wanted the path of least
resistance I doubt I'd be running Linux. Then again... I quit running Windows
because I couldn't take it anymore.
--
with kind regards,
Christian Unger
- < > - < > - < > - < > - < > - < > - < > - < > -
Alt. Email: [email protected]
ICQ: 204184156
Mobile: 0402 268904
Web: http://naiv.sourceforge.net
On Wed, 2004-01-28 at 18:11, Christian Unger wrote:
> Oh ... don't get me wrong, i think that the conceptual idea is awesome.
> Personally, i wouldn't know where to begin, but can the open source community
> compete with Nvidia and ATI? afterall this goes beyond software, it delves
Well, I think the first problem is that the idea is currently too big. If
someone were to do this successfully, they would make the first open cards
something like a Trident 8900C: something small but usable for people
who need it. The next cards would build on it, and so on and so on, until
you got a base that would meet the 3D ATI/Nvidia needs. Trying to aim
for the top at the beginning is a great way to crater.
--
Stephen John Smoogen [email protected]
Los Alamos National Lab CCN-5 Sched 5/40 PH: 4-0645
Ta-03 SM-1498 MailStop B255 DP 10S Los Alamos, NM 87545
-- So shines a good deed in a weary world. = Willy Wonka --
> someone were to do this sucessfully they would make the first open cards
> something like a Trident 8900C. Something small but usable for people
> who need it. The next cards would add onto it, and so on and so on until
> you got a base that would meet the 3D ATI/Nvidia needs. Trying to aim
> for the top at the beginning is a great way to crater.
When it comes to manufacturing, design, etc., the idea of release small,
release early would be too costly to survive, methinks.
Unless we had software to simulate a graphics chip and its software.
Like vmware but with emulation of hardware and software on that
virtual hardware. Lots of CPU power would be required there, but
then if we could write code for a virtual hardware emulator, "writing"
such a chip and then designing it would be feasible.
This is intriguing.
Regards,
Maciej
Quote from Stephen Smoogen <[email protected]>:
> On Wed, 2004-01-28 at 18:11, Christian Unger wrote:
> > Oh ... don't get me wrong, i think that the conceptual idea is awesome.
> > Personally, i wouldn't know where to begin, but can the open source community
> > compete with Nvidia and ATI? afterall this goes beyond software, it delves
>
> Well I think the first problem is that the idea is currently too big. If
> someone were to do this sucessfully they would make the first open cards
> something like a Trident 8900C. Something small but usable for people
> who need it. The next cards would add onto it, and so on and so on until
> you got a base that would meet the 3D ATI/Nvidia needs. Trying to aim
> for the top at the beginning is a great way to crater.
A simple framebuffer connected to the parallel port would be trivial
to make, and it would be surprisingly useful for simple applications
such as word processing. Literally a handful of components soldered
onto a piece of stripboard, plus well-written drivers for the
framebuffer console and X, is all it would take. No need for anything
remotely fancy at first.
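
To make the "handful of components" idea concrete, here is a minimal
sketch of the host side in C. Everything about the board (the
strobe-and-auto-increment protocol, 1 bpp, the 0x378 port address) is an
assumption made purely for illustration, not a description of any real
design.

/* Push a 1 bpp shadow framebuffer out the first parallel port to a
 * hypothetical stripboard framebuffer board. */
#include <stdint.h>
#include <string.h>
#include <sys/io.h>      /* ioperm(), outb() - Linux/x86, needs root */

#define LPT_DATA    0x378          /* data latch of the first parallel port */
#define LPT_CONTROL (LPT_DATA + 2) /* control lines: bit 0 is /STROBE */

#define WIDTH  640
#define HEIGHT 480
static uint8_t shadow[WIDTH * HEIGHT / 8];   /* 1 bpp shadow framebuffer */

static void lpt_put(uint8_t b)
{
    outb(b, LPT_DATA);
    outb(0x01, LPT_CONTROL);   /* pulse STROBE so the board latches the byte */
    outb(0x00, LPT_CONTROL);
}

/* Push the whole shadow buffer; the board is assumed to auto-increment
 * its internal address counter after each latched byte. */
static void blit_all(void)
{
    for (size_t i = 0; i < sizeof shadow; i++)
        lpt_put(shadow[i]);
}

int main(void)
{
    if (ioperm(LPT_DATA, 3, 1) < 0)
        return 1;                        /* need I/O port privilege */
    memset(shadow, 0xAA, sizeof shadow); /* test pattern: vertical stripes */
    blit_all();
    return 0;
}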
John.
Christian Unger wrote:
>>No more being at the mercy of closed-development graphics chip designers
>>who make Linux an after-though if they even think of us at all.
> Oh ... don't get me wrong, i think that the conceptual idea is awesome.
> Personally, i wouldn't know where to begin, but can the open source community
> compete with Nvidia and ATI? afterall this goes beyond software, it delves
> into hardware. Sure there are people with the knowledge, maybe even with the
> means, but i doubt the financial backing would be there from the get go.
>
We cannot compete with Nvidia or ATI or 3Dlabs or Matrox or even S3.
The real question we have to ask ourselves is, what would be the market
demand for a graphics card that is 3 generations behind the state of the
art and over-priced, the only advantage being that it's a 100% open
architecture?
I don't have $100k to have it fabricated, so we have to goad some
company into doing it for us, and given the volumes, they'll have to
charge way more than it's worth if you compare its capabilities against
ATI et al.
I've got some great ideas for how to do this chip, but they're frankly
nothing revolutionary. The obvious test bed is an FPGA. That imposes
serious limitations on what kind of logic utilization and performance we
can get. The ASIC version can be clocked faster, but we dare not put in
untested logic. (And we can't afford the tools necessary to do the
proper simulation.)
So, the big question: How many units a year would be sold for an
underpowered, over-priced graphics card that just happens to be 100%
open and 100% supported?
> The real question we have to ask ourselves is, what would be the market
> demand for a graphics card that is 3 generations behind the state of the
> art and over-priced, the only advantage being that it's a 100% open
> architecture?
Err, well there are always the server and embedded markets, if the
device was cheap enough.
> I don't have $100k to have it fabricated, so we have to goad some
> company into doing it for us, and given the volumes, they'll have to
> charge way more than it's worth if you compare its capabilities against
> ATI et al.
>
> I've got some great ideas for how to do this chip, but they're frankly
> nothing revolutionary. The obvious test bed is an FPGA. That imposes
> serious limitations on what kind of logic utilization and performance we
> can get. The ASIC version can be clocked faster, but we dare not put in
> untested logic. (And we can't afford the tools necessary to do the
> proper simulation.)
WHAT!? You are making the project out to be several orders of
magnitude more difficult and expensive than it is.
Did you know that you can generate a 625-line TV signal with little
more hardware than a Z80 CPU? Some 8-bit machines actually did that.
> So, the big question: How many units a year would be sold for an
> underpowered, over-priced graphics card that just happens to be 100%
> open and 100% supported?
Quite a few. Think of the TV-connected embedded appliance market, for
example. Displaying a static menu of choices isn't exactly very
demanding.
John.
On Thu, 29 Jan 2004, Timothy Miller wrote:
>
>
> Christian Unger wrote:
> >>No more being at the mercy of closed-development graphics chip designers
> >>who make Linux an after-though if they even think of us at all.
>
> > Oh ... don't get me wrong, i think that the conceptual idea is awesome.
> > Personally, i wouldn't know where to begin, but can the open source community
> > compete with Nvidia and ATI? afterall this goes beyond software, it delves
> > into hardware. Sure there are people with the knowledge, maybe even with the
> > means, but i doubt the financial backing would be there from the get go.
> >
>
> We cannot compete with Nvidia or ATI or 3Dlabs or Matrox or even S3.
>
> The real question we have to ask ourselves is, what would be the market
> demand for a graphics card that is 3 generations behind the state of the
> art and over-priced, the only advantage being that it's a 100% open
> architecture?
>
> I don't have $100k to have it fabricated, so we have to goad some
> company into doing it for us, and given the volumes, they'll have to
> charge way more than it's worth if you compare its capabilities against
> ATI et al.
>
> I've got some great ideas for how to do this chip, but they're frankly
> nothing revolutionary. The obvious test bed is an FPGA. That imposes
> serious limitations on what kind of logic utilization and performance we
> can get. The ASIC version can be clocked faster, but we dare not put in
> untested logic. (And we can't afford the tools necessary to do the
> proper simulation.)
>
>
> So, the big question: How many units a year would be sold for an
> underpowered, over-priced graphics card that just happens to be 100%
> open and 100% supported?
>
With the press Linux is getting from the IBM/Linux advertisements
for the US football games, etc., methinks it won't be long before
NVidia and all the rest go open-source, just to jump onto that
band-wagon. They just need a smart way to protect their intellectual
property.
Cheers,
Dick Johnson
Penguin : Linux version 2.4.24 on an i686 machine (797.90 BogoMips).
Note 96.31% of all statistics are fiction.
John Bradford wrote:
>>The real question we have to ask ourselves is, what would be the market
>>demand for a graphics card that is 3 generations behind the state of the
>>art and over-priced, the only advantage being that it's a 100% open
>>architecture?
>
>
> Err, well there are always the server and embedded markets, if the
> device was cheap enough.
Ah, but it won't be. Low-volume ASICs are expensive. The chip itself
would probably be around $150, not counting $100k NRE. Then you have to
pay for the board, make up for the NRE, and make some profit to make it
worthwhile. How much are YOU willing to pay?
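
To put rough numbers on that (my arithmetic, not from the original post):
amortizing a $100k NRE over 1,000 boards adds $100 per board before you
even buy the $150 chip; over 10,000 boards it is still $10 each. Low
volume really is the whole problem.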
>
>
>>I don't have $100k to have it fabricated, so we have to goad some
>>company into doing it for us, and given the volumes, they'll have to
>>charge way more than it's worth if you compare its capabilities against
>>ATI et al.
>>
>>I've got some great ideas for how to do this chip, but they're frankly
>>nothing revolutionary. The obvious test bed is an FPGA. That imposes
>>serious limitations on what kind of logic utilization and performance we
>>can get. The ASIC version can be clocked faster, but we dare not put in
>>untested logic. (And we can't afford the tools necessary to do the
>>proper simulation.)
>
>
> WHAT!? You are making the project out to be several orders of
> magnitude more difficult and expensive than it is.
>
> Did you know that you can generate a 625-line TV signal with little
> more hardware than a Z80 CPU? Some 8-bits actually did that.
Certainly. But when you can get perfectly good open-source drivers for
an ATI Rage 128 and the board for $15 from a Taiwanese manufacturer,
who's going to want what you're describing?
The thing you have to keep in mind is that in order for this open arch
board to get developed, someone has to be willing to invest in
fabricating it, and that means it has to be somewhat competitive and a
significant performer.
From the mouth of someone who has done a graphics ASIC and numerous
FPGA designs, also in graphics; who has worked on graphics boards in the
air traffic control, medical, and workstation console markets; who has
written X-server modules for the Number 9 i128, Matrox G450, Permedia 2
and 3, Radeon 7500 and 9000, my own graphics chip, and probably a number
of chips I've forgotten; AND who has been very performance- and
cost-conscious the whole time: it is MORE complicated than I make it
sound. That doesn't mean it's not doable. :)
>
>
>>So, the big question: How many units a year would be sold for an
>>underpowered, over-priced graphics card that just happens to be 100%
>>open and 100% supported?
>
>
> Quite a few. Think of the TV-connected embedded appliance market, for
> example. Displaying a static menu of choices isn't exactly very
> demanding.
This sort of thing is ALREADY available with open-source drivers.
Whatever we design is going to be EXPENSIVE. So, regardless of the fact
that an ATI All-in-Wonder Radeon 9000 is over-powered for the job you
describe, that board will be cheaper than what we could produce.
Because of certain invariant costs, there is a performance point below
which it is not worth it. Because of non-invariant costs, there is a
performance point above which it is not worth it. There may or may not
be a point where the compromise makes it worth doing.
Now, this all assumes that it's completely a hobbyist project. If we
were to design something that was, in principle, a good performer, but
we couldn't simulate, debug, and fabricate it, we MIGHT be able to
convince some companies to do that FOR us. And they might even be able
to enhance it in ways that would make it compete on performance.
But it's still going to be expensive.
Richard B. Johnson wrote:
>
>
> With the press Linux is getting from the IBM/Linux advertisements
> for the US football games, etc., methinks it won't be long before
> NVidia and all the rest go open-source, just to jump onto that
> band-wagon. They just need a smart way to protect their intellectual
> property.
>
Indeed! And this may make the whole idea of an open-arch GPU a pipe-dream.
Honestly, we don't _need_ an open-arch GPU. We just need something
whose register set is fully publicly documented.
An open-arch GPU would be NEAT, though. :)
Hmmm... If I understand this right, one of the reasons nVidia doesn't
release open source drivers is that they don't own all of the IP in
their cores. I wonder what that IP is and if the open-source community
couldn't collaborate to produce LGPL-like replacements.
Quote from Timothy Miller <[email protected]>:
>
[snip]
> > Err, well there are always the server and embedded markets, if the
> > device was cheap enough.
>
> Ah, but it won't be. Low-volume ASICs are expensive. The chip itself
> would probably be around $150, not counting $100k NRE. Then you have to
> pay for the board, make up for the NRE, and make some profit to make it
> worth while. How much are YOU willing to pay?
Yes, for real-world devices there is always the consideration that you
can buy a $15 card, and if your requirements are simple enough, simply
ignore the bits that you don't need and drive it with open-source code.
The cost of developing a much simpler and slightly cheaper solution
outweighs the potential saving, so there is no real incentive to
develop it.
However, if the much simpler but cheaper project already existed, and
was as little as $1 cheaper to produce in volume, would embedded
manufacturers use it? I suspect they would.
> The thing you have to keep in mind is that in order for this open arch
> board to get developed, someone has to be willing to invest in
> fabricating it, and that means it has to be somewhat competitive and a
> significant performer.
Well, the cost of fabricating depends on the device. I was basically
thinking of a 68000, an EPROM and a SIMM on a piece of stripboard,
some ribbon cable and a DB-25 connector.
Maybe our goals are somewhat different :-)
John.
John Bradford wrote:
> Well, the cost of fabricating depends on the device. I was basically
> thinking of a 68000, an EPROM and a SIMM on a piece of stripboard,
> some ribbon cable and a DB-25 connector.
>
> Maybe our goals are somewhat different :-)
Very different. What you're describing is a dumb terminal.
What I'm describing is a PC console graphics card that will let someone
play Quake III at a reasonable framerate.
Isn't that what most people want?
And the performance disparity between what you're describing and what
I'm describing is enormous!
On Thu, 2004-01-29 at 08:13, Timothy Miller wrote:
> We cannot compete with Nvidia or ATI or 3Dlabs or Matrox or even S3.
>
> The real question we have to ask ourselves is, what would be the market
> demand for a graphics card that is 3 generations behind the state of the
> art and over-priced, the only advantage being that it's a 100% open
> architecture?
I think I can do better than that by buying two-generation-behind cards
off eBay.
The older Matrox, ATI, etc. cards have complete driver documentation
(AFAIK), or at least are very well supported by completely open Linux /
XFree86 drivers.
You can get a Matrox Millennium G400 on eBay right now for under $20. Or
an ATI 7500 for under $30.
So, as cool as an open-source graphics chip would be, I think if you
can't manufacture a complete AGP card with your new chip for less than
$30, with better performance than those parts... don't bother.
--
Torrey Hoffman <[email protected]>
On Thu, Jan 29, 2004 at 11:58:18AM -0500, Timothy Miller wrote:
>
> But an open-arch GPU would be NEAT, though. :)
Some people agree:
http://www.opencores.org/projects/manticore/
http://www.opencores.org/projects/vga_lcd/
Frank
--
"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it." - Brian W. Kernighan
> > Well, the cost of fabricating depends on the device. I was basically
> > thinking of a 68000, an EPROM and a SIMM on a piece of stripboard,
> > some ribbon cable and a DB-25 connector.
> >
> > Maybe our goals are somewhat different :-)
>
> Very different. What you're describing is a dumb terminal.
Hardly. It's nothing like a dumb terminal whatsoever.
It's a simple framebuffer, possibly with line-drawing and box-filling
capabilities. Nevertheless, it could be used as a general-purpose X
display for spreadsheets, simple to moderate word processing
(i.e. probably not DTP-like applications), status displays for various
systems, etc.
So, it does have real-world uses.
> What I'm describing is a PC console graphics card that will let someone
> play Quake III at a reasonable framerate.
>
> Isn't that what most people want?
In the embedded and server markets, I don't see it being a major
requirement, actually.
Just because a standard graphics card is going to do all they want and
be cheaper to develop doesn't make it a requirement.
> And the performance disparity between what you're describing and what
> I'm describing is enormous!
Your arguments seem to be based on the fact that fabricating an ASIC
is out of the budget of most individuals, and that no large company
would want to develop open source graphics hardware when they can buy
$15 graphics cards. That argument is perfectly valid, but it's
incomplete.
What _is_ within the budget of most interested individuals are things
like general-purpose CPUs, generic video sync generation ICs, and SIMMs.
The parallel port remains far easier to interface to than the PCI bus,
and can easily provide enough bandwidth for experimenting with simple
640x480 framebuffer-style graphics applications.
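
A rough sanity check on that bandwidth claim (my numbers, not from the
original post): 640x480 at 8 bpp is about 300 KB per frame. A plain SPP
parallel port moves on the order of 100-150 KB/s, while EPP/ECP modes
manage somewhere around 0.5-2 MB/s, so a full-screen update takes
anywhere from a couple of seconds down to a fraction of a second.
Hopeless for animation, but plausible for mostly static displays if the
driver only pushes the regions that changed.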
So, we can either do something interesting with the above, or sit
around discussing how expensive it is to make a graphics card.
At least it provides a way for us to create the first generation of
open graphics hardware cheaply, and experiment with various ideas.
Besides, this is just the first stage - once we have the graphics
card, we can move on to other things like the 9-track tape drive
discussed on LKML a while ago:
http://marc.theaimsgroup.com/?l=linux-kernel&m=105128749415083&w=2
John.
Torrey Hoffman wrote:
>
> I think I can do better than that by buying two-generation-behind cards
> off EBay.
>
[snip]
Agreed. But would those not eventually run out? Certainly, there will
always be a supply of used cards that are two generations behind, but
eventually, we may get to the point where all the used cards have no
public documentation.
But perhaps at that point, Linux will dominate and the manufacturers
will feel pressured to open their register sets.
John Bradford wrote:
>>>Well, the cost of fabricating depends on the device. I was basically
>>>thinking of a 68000, an EPROM and a SIMM on a piece of stripboard,
>>>some ribbon cable and a DB-25 connector.
>>>
>>>Maybe our goals are somewhat different :-)
>>
>>Very different. What you're describing is a dumb terminal.
>
>
> Hardly. It's nothing like a dumb terminal whatsoever.
>
> It's a simple framebuffer, possibly with line drawing, and box filling
> capabilities. Nevertheless, it could be used as a general purpose X
> display, for spreadsheets, simple to moderate wordprocessing,
> (I.E. probably not DTP-like applications), status displays for various
> systems, etc.
>
> So, it does have real world uses.
But wouldn't it be painfully slow?
>
>
>>What I'm describing is a PC console graphics card that will let someone
>>play Quake III at a reasonable framerate.
>>
>>Isn't that what most people want?
>
>
> In the embedded and server markets, I don't see it being a major
> requirement, actually.
>
> Just because a standard graphics card is going to do all they want and
> be cheaper to develop, doesn't make it a requirement.
Have you ever used a graphics card in VESA mode? Dragging a window
around the screen and watching it repaint can be painful. From what
you've described, this is the sort of thing you'd get.
>
>
>>And the performance disparity between what you're describing and what
>>I'm describing is enormous!
>
>
> Your arguments seem to be based on the fact that fabricating an ASIC
> is out of the budget of most individuals, and that no large company
> would want to develop open source graphics hardware when they can buy
> $15 graphics cards. That argument is perfectly valid, but it's
> incomplete.
>
> What _is_ within the budget of most interested individuals are things
> like general purpose CPUs, generic video sync generation ICs, SIMMs.
> The parallel port remains far easier to interface to than the PCI bus,
> and can easily provide enough bandwidth for experimenting with simple
> 640x480 framebuffer graphics type applications.
Interfacing with the PCI bus is easy enough in an FPGA. If all you want
is a dumb framebuffer, you can fit that logic into a very small,
inexpensive Xilinx part. All you need is a DAC and some memory chips,
and you're set.
But even PCI can be very slow, particularly for image loads.
>
> So, we can either do something interesting with the above, or sit
> around discussing how expensive it is to make a graphics card.
>
> At least it provides a way for us to create the first generation of
> open graphics hardware cheaply, and experiment with various ideas.
>
> Besides, this is just the first stage - once we have the graphics
> card, we can move on to other things like the 9-track tape drive
> discussed on LKML a while ago:
Ok, so, how about this idea:
- Small Xilinx FPGA, 16M of RAM, and a DAC on a board.
- AGP 2X
- Up to 2048x2048 resolution at 8, 16, and 32 bpp.
- Acceleration ONLY for solid fills and bitblts on-screen.
Given that so little is accelerated, there is no point in putting more
than the viewable framebuffer on the card, hence the 16 megs. It would
probably actually HURT performance to cache pixmaps on the card.
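
For illustration only, here is a driver-side view of what a
fill-and-blit-only engine like this could look like. The register layout
below is entirely made up; it exists only to show how small the
programming model of such a card would be.

/* Purely hypothetical register interface for a fill/blit-only card. */
#include <stdint.h>

struct fb_accel_regs {              /* assumed memory-mapped over AGP/PCI */
    volatile uint32_t src_xy;       /* source x | (y << 16), blit only    */
    volatile uint32_t dst_xy;       /* destination x | (y << 16)          */
    volatile uint32_t size;         /* width | (height << 16)             */
    volatile uint32_t color;        /* fill color, ignored for blits      */
    volatile uint32_t cmd;          /* bit 0 = go, bit 1 = 0:fill 1:blit  */
    volatile uint32_t status;       /* bit 0 = engine busy                */
};

void solid_fill(struct fb_accel_regs *r,
                int x, int y, int w, int h, uint32_t color)
{
    while (r->status & 1)           /* wait until the engine is idle */
        ;
    r->dst_xy = (uint32_t)x | ((uint32_t)y << 16);
    r->size   = (uint32_t)w | ((uint32_t)h << 16);
    r->color  = color;
    r->cmd    = 0x1;                /* go, fill mode */
}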
Oh, there's one thing I forgot. It would have to support VGA. There is
a VGA core on opencores.org that we could use, but its logic area would
probably push up the FPGA cost so that the board was in the $100 range.
Probably more.
<sigh>
> > It's a simple framebuffer, possibly with line drawing, and box filling
> > capabilities. Nevertheless, it could be used as a general purpose X
> > display, for spreadsheets, simple to moderate wordprocessing,
> > (I.E. probably not DTP-like applications), status displays for various
> > systems, etc.
> >
> > So, it does have real world uses.
>
> But wouldn't it be painfully slow?
[snip]
> Have you ever used a graphics card in VESA mode? Dragging a window
> around the screen and watching it repaint can be a very unenjoyable
> thing to watch. From what you've described, this is the sort of thing
> you'd get.
OK, maybe it would be too slow for practical desktop use.
> > So, we can either do something interesting with the above, or sit
> > around discussing how expensive it is to make a graphics card.
> >
> > At least it provides a way for us to create the first generation of
> > open graphics hardware cheaply, and experiment with various ideas.
> >
> > Besides, this is just the first stage - once we have the graphics
> > card, we can move on to other things like the 9-track tape drive
> > discussed on LKML a while ago:
>
>
> Ok, so, how about this idea:
>
> - Small Xilinx FPGA, 16M of RAM, and a DAC on a board.
> - AGP 2X
> - Up to 2048x2048 resolution at 8, 16, and 32 bpp.
> - Acceleration ONLY for solid fills and bitblts on-screen.
>
> Given that so little is accelerated, there is no point in putting more
> than the viewable framebuffer on the card, hense the 16 megs. It would
> probably actually HURT performance to cache pixmaps on the card.
If we put 4 or more on each board, it could be useful for betting
shops, stock markets, shop window displays, and other applications
where you need to control a dozen or more screens, which basically
contain textual information, but where 80x25 text mode just isn't
enough. I.E. you might want the odd pie chart or different sized text
or something.
> Oh, there's one thing I forgot. It would have to support VGA.
Maybe not; the primary market for this (i.e. what makes it
cost-effective to produce, and therefore available for developers to use
as their primary display) could be users who want to control many
displays, and who would have a standard VGA card for the primary
monitor. (Yeah, it would be kind of ironic if 99% of our amazing new
graphics cards ended up in machines with another card as the primary
display, but then again, if it makes the open hardware available for
developers to experiment with at a reasonable cost, it would be worth
doing).
So, what about a PCI card with four or eight 16MB framebuffers, and
the basic acceleration and other specs you described above? Is that
at least slightly feasible, do you think?
John.
John Bradford wrote:
>
> If we put 4 or more on each board, it could be useful for betting
> shops, stock markets, shop window displays, and other applications
> where you need to control a dozen or more screens, which basically
> contain textual information, but where 80x25 text mode just isn't
> enough. I.E. you might want the odd pie chart or different sized text
> or something.
The market for secondary heads is too small. You can get an ATI Mach 64
PCI card for pennies and add it as a second head for what you're describing.
For an open-source graphics card to be marketable, it would have to be
attractive as a primary head used in Linux workstations and servers, and
it would have to be so in a PC market.
>
>
>>Oh, there's one thing I forgot. It would have to support VGA.
>
>
> Maybe not, the primary market for this, (I.E. what makes it cost
> effective to produce, and therefore available for developers to use as
> their primary display), could be users who want to control many
> displays, and who would have a standard VGA card for the primary
> monitor. (Yeah, it would be kind of ironic if 99% of our amasing new
> graphics cards ended up in mahines with another card as the primary
> display, but then again, if it makes the open hardware available for
> developers to experiment with at a reasonable cost, it would be worth
> doing).
The irony is too much. Seriously.
>
> So, what about a PCI card with four or eight 16MB framebuffers, and
> the basic acceleration and other specs you described above. Is that
> at least slightly feasible, do you think?
Adding extra heads is relatively easy, and you can keep the memory
unified and do it all in one chip.
Timothy Miller wrote:
>
>
> John Bradford wrote:
>
[...]
>>> What I'm describing is a PC console graphics card that will let
>>> someone play Quake III at a reasonable framerate.
>>>
>>> Isn't that what most people want?
>>
>>
>>
>> In the embedded and server markets, I don't see it being a major
>> requirement, actually.
>>
>> Just because a standard graphics card is going to do all they want and
>> be cheaper to develop, doesn't make it a requirement.
>
>
> Have you ever used a graphics card in VESA mode? Dragging a window
> around the screen and watching it repaint can be a very unenjoyable
> thing to watch. From what you've described, this is the sort of thing
> you'd get.
>
I run X on an unaccelerated framebuffer (1280x1024, 16-bit color) every day.
I don't even _notice_ a difference from accelerated X for a number of uses,
such as word processing, watching movies with mplayer, web browsing and
programming. Dragging a window around is fine!
Simple OpenGL games like "Frozen Bubble" with software rendering are fine
too, on a 333MHz dual Celeron.
The only stuff that doesn't work well is 3D-intensive stuff like Quake and
Tux Racer.
(The unaccelerated X server is running on the second head of a Matrox G550.
The primary head uses acceleration, but is often in use by another user.)
So a good 2D card is trivial - a video signal generator and memory on an
AGP bus. Let the host processor do software rendering. Cheap, and I believe
this is the sort of thing embedded users might go for when they want to
display mostly static stuff. (Web-based info kiosks and similar.)
Add a BIOS ROM and you can even see what happens during boot on a PC.
The next step up is 2D acceleration, which is easy enough by sticking a
generic microprocessor there. Maybe an inexpensive Celeron/Duron.
Then there's 3D, and enough of it to play Quake. The first Quakes ran
fine with software rendering and processors that were slow by today's
standards. Today's cheap processors are faster - I wonder if putting 2-4 of
them on the card might be enough. They'd be able to access the memory
directly, not limited to slow AGP/PCI speeds. And they'd be able to divide
the work between them, rendering separate parts of the screen.
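
A sketch of that last idea (the screen split between processors),
assuming ordinary POSIX threads as a stand-in for the on-card CPUs. The
rendering itself is just a placeholder pattern; the point is the
partitioning, where each worker owns a disjoint band of scanlines and
needs no locking.

#include <pthread.h>
#include <stdint.h>

#define WIDTH   1024
#define HEIGHT   768
#define NCPUS      4

static uint16_t framebuffer[WIDTH * HEIGHT];   /* 16 bpp, shared */

struct band { int y0, y1; };

static void render_band(int y0, int y1)
{
    for (int y = y0; y < y1; y++)
        for (int x = 0; x < WIDTH; x++)
            framebuffer[y * WIDTH + x] = (uint16_t)(x ^ y);  /* placeholder */
}

static void *worker(void *arg)
{
    struct band *b = arg;
    render_band(b->y0, b->y1);
    return NULL;
}

int main(void)
{
    pthread_t tid[NCPUS];
    struct band bands[NCPUS];

    for (int i = 0; i < NCPUS; i++) {
        bands[i].y0 = HEIGHT * i / NCPUS;       /* disjoint bands, no locks */
        bands[i].y1 = HEIGHT * (i + 1) / NCPUS;
        pthread_create(&tid[i], NULL, worker, &bands[i]);
    }
    for (int i = 0; i < NCPUS; i++)
        pthread_join(tid[i], NULL);
    return 0;
}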
[...]
> - Small Xilinx FPGA, 16M of RAM, and a DAC on a board.
> - AGP 2X
> - Up to 2048x2048 resolution at 8, 16, and 32 bpp.
Why bother with 8-bit?
> - Acceleration ONLY for solid fills and bitblts on-screen.
>
> Given that so little is accelerated, there is no point in putting more
> than the viewable framebuffer on the card, hense the 16 megs. It would
> probably actually HURT performance to cache pixmaps on the card.
>
>
> Oh, there's one thing I forgot. It would have to support VGA. There is
Why VGA? When you have a _driver_, you don't need compatibility at all.
(Just like sound cards - they don't need SoundBlaster compatibility for
anything.)
The PC doesn't need VGA - it can boot using the card's BIOS.
Linux doesn't need VGA - it will use the provided driver.
Apps don't need VGA; they don't do that sort of thing anyway. They
use the tty/X11/SDL/OpenGL.
> a VGA core on opencores.org that we could use, but its logic area would
> probably push up the FPGA cost so that the board was in the $100 range.
> Probably more.
Another reason to drop VGA then - money.
Helge Hafting
On Thursday 29 January 2004 12:55, John Bradford wrote:
> > > Well, the cost of fabricating depends on the device. I was basically
> > > thinking of a 68000, an EPROM and a SIMM on a piece of stripboard,
> > > some ribbon cable and a DB-25 connector.
> > >
> > > Maybe our goals are somewhat different :-)
> >
> > Very different. What you're describing is a dumb terminal.
>
> Hardly. It's nothing like a dumb terminal whatsoever.
>
> It's a simple framebuffer, possibly with line drawing, and box filling
> capabilities. Nevertheless, it could be used as a general purpose X
> display, for spreadsheets, simple to moderate wordprocessing,
> (I.E. probably not DTP-like applications), status displays for various
> systems, etc.
>
> So, it does have real world uses.
Yes - but you want it:
1. to use the AGP to gain access to multiple offscreen pages
2. a DMA controller to copy the data
3. a simple emulation (either 8-bit-CPU-based or better) of VGA/SVGA
4. room in the design for future processors.
Really - future processors:
1. including multiple vector multiply processors
2. general purpose CPU for control
3. LOTS of memory.
What you REALLY need to do (long term) is to move the entire X server
into a graphics board (including Mesa/OpenGL/... but minus the network
code, authentication, and resource database...).
It is my understanding that a LOT of the effort spent on speed is lost by
using a single-threaded process to handle the graphics. With multiple CPUs
(not necessarily SMP, mind you) performing the graphics transformations, you
have a single rendering output step (another case for multiple CPUs - 1 CPU:
the entire pixel render, 2: each takes 1/2 the display, 4: 1/4 each...). And
with multiple dual-ported graphics memories (one port to the pixel-rendering
CPU, one port to the frame buffer) you end up with a very fast graphics
display.
Limiting factor: it may be bigger than a single slot.
It would likely resemble the old SGI type of rendering engine, which used
multiple boards, multiple staging memory, and multiported display.
BUT: it would be modular. Pay a little and you only get a frame buffer.
Add a general CPU - you get a basic X server (with slow 3D, but likely
faster than currently done by the host processor)-- and pay more.
Add a geometry engine (ie a processor/memory for Mesa) you get faster 3D
operations...
Add multiple engines (each takes part of the display) you get speed...
It would likely require one AGP, but two PCI slots; and like the SGI
engines, an internal connection between the two boards.
This would also allow the project to have price levels - a $20 AGP frame
buffer wouldn't be bad at all (and not all that slow either...)
Add $40 for a general CPU... with the benefit of offloading the major X
functions... and still have the ability to use the AGP. (BTW - the AGP
is bi-directional... you should be able to copy images from the framebuffer.)
Add $40 (might have to trade in the existing general CPU... so it could
actually be ~$80) and you should get options for multiple geometry
processors... at $10/20 each?
Note - the costs shown for the last upgrade are very likely wrong.
> > What I'm describing is a PC console graphics card that will let someone
> > play Quake III at a reasonable framerate.
> >
> > Isn't that what most people want?
Something like the above should do.
>
[snip]
NOTE: My guilt meter for being so far off topic is starting to peg.
Let's either find a better open forum to discuss this or just take it
off list.
The only reason I haven't taken it completely off list so far is because
SOME aspects of this discussion may be relevant to Free Software in
general. It is the Linux mentality that spurred this idea and Linux
users who would be the target market. But now we're just arguing
trivialities.
So, the obligatory on-topic comment is simple: Any Linux-targeted
graphics chip has to be quite sophisticated and cost-competitive to some
extent in order for the idea to have any merit whatsoever. Core Linux
users on this list are the most qualified to dictate what they would use
it for.
Unfortunately, the comments I have gotten so far lead me to solutions
that already exist.
Helge Hafting wrote:
>>
> I run X on an unaccelerated framebuffer (1280x1024 16bit color) every day.
This is you. How many other people would be happy with this?
>
> So a good 2D card is trivial - a video signal generator and memory on an
> AGP bus.
I wouldn't call it 'good', but you're essentially correct. Mind you,
you can get the latest S3 chip for $30 or less. That's well documented,
a lot faster than a dumb framebuffer, and has VGA built in.
Once you get below a certain level of performance in this open-source
design, it's no longer worth doing because there are so many low-end
alternatives that blow it away.
> Let the host processor do software rendering. Cheap, and I believe this is
> the sort of thing embedded uses might go for when they want to display
> mostly
> static stuff. (Web-based info kiosk and similiar).
Not cheap. You can get cheaper with what's already out there!
Let me put it this way: To make it cheaper than what's out there,
someone would have to do a run of _millions_ of some minimalist chip
that sold for like $1. You could have a graphics card for $5. THAT
would make it worth it. But we would never get the volumes necessary
for that!
> Add a BIOS rom and you can even see what happens during boot on a pc.
A BIOS ROM doesn't help you if you don't have a VGA core, and a VGA core
is not a trivial piece of logic.
I like Macs, Suns, and other UNIX workstations because they don't rely
on this antiquated piece of logic to act as a console. The chip I
designed doesn't do VGA, but that doesn't stop it from working nicely as
a console in a Sun.
>
> The next step up is 2D acceleration, which is easy enough by sticking a
> generic microprocessor there. Maybe an inexpensive celeron/duron.
<sigh> Think about the logic area required for that. For BASIC 2D
acceleration, the amount of logic required for elementary operations is
minuscule compared to the logic required for even the simplest of CPU
cores. And the dedicated logic would be faster!
>
> Then there's 3D, and enough of it to play quake. The first quakes ran
> fine with software rendering and processors that were slow by today's
> standards. Todays cheap processors are faster - I wonder if putting 2-4
> of them on the card might be enough. They'd be able to access the memory
> directly,
> not limited to slow AGP/PCI speeds. And they'd be able to divide the work
> between them, rendering separate parts of the screen.
My knowledge of 3D graphics is limited to linear algebra and mostly pure
theory, but I do have SOME clue as to what existing 3D engines are like.
The issue of general-purpose CPUs versus GPUs has been discussed
on the web ad nauseam, and modern GPUs are an order of magnitude
faster than CPUs at what they do. And they're less expensive!
>
> Why bother with 8-bit?
There are reasons. We can go into them off-list.
>
>
> Why VGA? When you have a _driver_ , you don't need compatibility at all.
BOOT CONSOLE. You cannot get a boot console on a PC without a VGA core.
Once the kernel takes over, you're right, but until then...
>
>
> Another reason to drop VGA then - money.
As soon as PC BIOS's don't require it, we can drop it.
On Fri, 30 Jan 2004, Timothy Miller wrote:
> > Another reason to drop VGA then - money.
>
> As soon as PC BIOS's don't require it, we can drop it.
No PC BIOS recognizes a VGA. The PC/AT firmware uses int 0x10 to
communicate with the console and as long as there is a handler there,
console output works. Most systems will actually run without a handler,
too, but they'll usually complain to the speaker. The handler is provided
by the ROM firmware of the primary graphics adapter.
Old PC/AT firmware actually did recognize a few display adapters, namely
the CGA and the MDA, which had no firmware of their own. These days support
for these options is often absent, even though the setup program may provide
an option to select between CGA40/CGA80/MDA/none (the latter being
equivalent to an adapter such as an EGA or a VGA that provides its own
firmware).
--
+ Maciej W. Rozycki, Technical University of Gdansk, Poland +
+--------------------------------------------------------------+
+ e-mail: [email protected], PGP key available +
Timothy Miller <[email protected]> writes:
>> Why VGA? When you have a _driver_ , you don't need compatibility at
>> all.
>
> BOOT CONSOLE. You cannot get a boot console on a PC without a VGA
> core. Once the kernel takes over, you're right, but until then...
Why PC?
>> Another reason to drop VGA then - money.
>
> As soon as PC BIOS's don't require it, we can drop it.
Linuxbios?
--
Måns Rullgård
[email protected]
Måns Rullgård wrote:
>
> Why PC?
Uh, the most common platform on which Linux is run?
Important point: unless the sales volumes of this card exceed a certain
point, the project is pointless.
The volumes in, say, the Sun market are so low that you can't get
a decent console graphics card for under $300.
But the Sun/Solaris market doesn't care about open source. So why bother?
>
>
>>>Another reason to drop VGA then - money.
>>
>>As soon as PC BIOS's don't require it, we can drop it.
>
>
> Linuxbios?
>
Good point. As soon as every PC is running it, then that'll be helpful.
Maciej W. Rozycki wrote:
> On Fri, 30 Jan 2004, Timothy Miller wrote:
>
>
>>>Another reason to drop VGA then - money.
>>
>>As soon as PC BIOS's don't require it, we can drop it.
>
>
> No PC BIOS recognizes a VGA. The PC/AT firmware uses int 0x10 to
> communicate with the console and as long as there is a handler there,
> console output works. Most systems will actually run without a handler,
> too, but they'll usually complain to the speaker. The handler is provided
> by the ROM firmware of the primary graphics adapter.
>
> Old PC/AT firmware actually did recognize a few display adapters, namely
> the CGA and the MDA which had no own firmware. These days support for
> these option is often absent, even though the setup program may provide an
> option to select between CGA40/CGA80/MDA/none (the latter being equivalent
> to an option such as an EGA or a VGA, providing its own firmware).
>
You're not entirely correct here. I attempted to write a VGA BIOS for a
card which did not have hardware support for 80x25 text.
I first tried intercepting int 0x10. I quickly discovered that most DOS
programs bypass int 0x10 and write directly to the display memory. As a
result, very little of what should have displayed actually did.
Next, I tried hanging off the timer interrupt. I had two copies of the
text display, "now" and "what it was before". I would compare the
characters and render any differences. This worked quite well for DOS,
but the instant ANY OS switched to protected mode, it took over the
interrupt and all console messages stopped. Actually, the same was true
for int 0x10.
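
For clarity, the shadow-copy scheme described above boils down to
something like the following sketch, written in C. The real code was
16-bit BIOS code, not C, and draw_glyph() here is a hypothetical
stand-in for the card-specific routine that paints one character cell.

#include <stdint.h>

#define COLS 80
#define ROWS 25

/* Each cell is a character byte plus an attribute byte, laid out as in
 * color text memory at physical 0xB8000. */
static volatile uint16_t *text_mem =
        (volatile uint16_t *)(uintptr_t)0xB8000;
static uint16_t shadow[ROWS * COLS];

extern void draw_glyph(int col, int row, uint8_t ch, uint8_t attr);

void timer_tick(void)               /* called from the periodic interrupt */
{
    for (int i = 0; i < ROWS * COLS; i++) {
        uint16_t cell = text_mem[i];
        if (cell != shadow[i]) {            /* only repaint what changed */
            shadow[i] = cell;
            draw_glyph(i % COLS, i / COLS,
                       (uint8_t)(cell & 0xFF),   /* character code */
                       (uint8_t)(cell >> 8));    /* attribute byte */
        }
    }
}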
Even just the DOS shell command-line tends to bypass int 0x10 and write
directly to display memory.
Furthermore, 640x480x16 simply won't happen at all without direct
hardware support. Some things rely on that (or mode X or whatever) for
initial splash screens.
In the PC world, too many assumptions are made about the hardware for
any kind of software emulation to work.
The suggestion that a general-purpose CPU on the graphics card could be
used to emulate it is correct, but the logic area of the general-purpose
CPU is greater than that of the dedicated VGA hardware. Furthermore,
you can't just "stick a Z80 onto the board", because multi-chip
solutions up the board cost too much.
On Fri, 30 Jan 2004, Timothy Miller wrote:
> You're not entirely correct here. I attempted to write a VGA BIOS for a
> card which did not have hardware support for 80x25 text.
>
> I first tried intercepting int 0x10. I quickly discovered that most DOS
> programs bypass int 0x10 and write directly to the display memory. As a
> result, very little of what should have displayed actually did.
Of course, but DOS is not BIOS and the assumption is we want to use the
adapter as a boot console and with Linux. The former is handled with
appropriate firmware and the latter with a driver.
Actually I had an opportunity to use a few PC/AT headless systems (no
video adapter at all, although one could be placed in a PCI slot) with an
option called "serial console redirection" in the firmware. Their BIOS
setup program proved to work just fine over a serial line (unfortunately a
VT100 terminal was assumed, so I had to type e.g. ^[OP to "press" <F1>,
but it worked), as did any console output, including LILO (which had to be
taught to use the regular console instead of accessing the serial port for
I/O directly).
--
+ Maciej W. Rozycki, Technical University of Gdansk, Poland +
+--------------------------------------------------------------+
+ e-mail: [email protected], PGP key available +
Maciej W. Rozycki wrote:
>
> Of course, but DOS is not BIOS and the assumption is we want to use the
> adapter as a boot console and with Linux. The former is handled with
> appropriate firmware and the latter with a driver.
>
Perhaps someone can tell us what the Linux kernel does before the
console driver gets loaded.
If the console driver is a module, then all kernel init messages that
appear before the module is loaded have nowhere to go.
> A BIOS ROM doesn't help you if you don't have a VGA core, and a VGA core
> is not a trivial piece of logic.
We don't need a VGA core.
The primary markets for this card are going to be developers who don't
care about using a standard VGA card for pre-kernel-loaded stuff, the
embedded and server markets, where they can simply write a new system
BIOS that emulates 80x25 text mode on the framebuffer, and the
multi-head market, where the kernel or X will be responsible for the
extra heads anyway.
Once the kernel or X has taken over the framebuffer, there is
certainly no need for a VGA core.
As far as I am concerned, the first version of this card is going to
be more or less an expensive proof-of-concept thing. It _will_ cost
more than a brand new off-the-shelf VGA card, and it _will_ cost more
than a second-hand VGA card with documented registers. That doesn't
mean it isn't worth doing, though.
John.
On Fri, 30 Jan 2004, Timothy Miller wrote:
> > Of course, but DOS is not BIOS and the assumption is we want to use the
> > adapter as a boot console and with Linux. The former is handled with
> > appropriate firmware and the latter with a driver.
>
> Perhaps someone can tell us what the Linux kernel does before the
> console driver gets loaded.
The kernel does a lot of things, but if you mean console output, then
it doesn't start before the console driver is initialized, unless a
so-called initial console with a suitable driver is present, which may be
firmware-driven (so the driver may be a trivial redirector to appropriate
firmware callbacks).
> If the console driver is a module, then all kernel init messages that
> appear before the module is loaded have nowhere to go.
If there's no better console available, e.g. because there's no suitable
hardware present in the system or no drivers have been loaded, then the
dummy console is used -> drivers/video/console/dummycon.c.
And if you worry about the messages being lost, then you can always retrieve
them from the kernel log buffer -- use `dmesg' for example. ;-)
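
As a sketch of why those early messages aren't lost: a console driver
registered with CON_PRINTBUFFER gets the buffered messages replayed into
its write() callback. Roughly, against the 2.6-era API (the mycard_*
names are hypothetical):

#include <linux/console.h>
#include <linux/init.h>

static void mycard_console_write(struct console *con,
                                 const char *s, unsigned int count)
{
    /* render 'count' bytes of text into the card's framebuffer here */
}

static struct console mycard_console = {
    .name  = "mycard",
    .write = mycard_console_write,
    .flags = CON_PRINTBUFFER,   /* replay the log buffer on registration */
    .index = -1,
};

static int __init mycard_console_init(void)
{
    register_console(&mycard_console);
    return 0;
}
console_initcall(mycard_console_init);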
--
+ Maciej W. Rozycki, Technical University of Gdansk, Poland +
+--------------------------------------------------------------+
+ e-mail: [email protected], PGP key available +
On Fri, Jan 30, 2004 at 12:40:38PM -0500, Timothy Miller wrote:
>
>
> Maciej W. Rozycki wrote:
> >On Fri, 30 Jan 2004, Timothy Miller wrote:
> >
> >
> >>>Another reason to drop VGA then - money.
> >>
> >>As soon as PC BIOS's don't require it, we can drop it.
> >
PC BIOSes don't need VGA and never did!
They use the int 0x10 handler provided by the graphics card BIOS.
When you make the card, you get to write that BIOS. No problem!
I once used a DEC Rainbow - a PC with an IBM-incompatible screen.
The display memory was organized as a linked list of lines instead
of an array of characters. It came with its own special version
of MS-DOS 2.11. Few ordinary DOS programs would run on it,
because most tried to access the "standard" 80x25 array instead
of using MS-DOS for I/O. Those that did the right thing worked, though.
(An odd machine in other ways too - it had a Z80 controlling the
floppies and an 8088 controlling the screen and hard disk.
An early-'80s asymmetric multiprocessor :-)
> >
> > No PC BIOS recognizes a VGA. The PC/AT firmware uses int 0x10 to
> >communicate with the console and as long as there is a handler there,
> >console output works. Most systems will actually run without a handler,
> >too, but they'll usually complain to the speaker. The handler is provided
> >by the ROM firmware of the primary graphics adapter.
> >
> > Old PC/AT firmware actually did recognize a few display adapters, namely
> >the CGA and the MDA which had no own firmware. These days support for
> >these option is often absent, even though the setup program may provide an
> >option to select between CGA40/CGA80/MDA/none (the latter being equivalent
> >to an option such as an EGA or a VGA, providing its own firmware).
> >
>
> You're not entirely correct here. I attempted to write a VGA BIOS for a
> card which did not have hardware support for 80x25 text.
>
> I first tried intercepting int 0x10. I quickly discovered that most DOS
> programs bypass int 0x10 and write directly to the display memory. As a
> result, very little of what should have displayed actually did.
>
Sure, but we're not interested in "most DOS programs", are we?
The PC BIOS boot-up will work; it uses int 0x10.
LILO output will work.
Linux kernel console output will work.
X will work, either with the generic framebuffer driver or with
a proper driver written for the open hardware.
> Next, I tried hanging off this timer interrupt. I had two copies of the
> text display, "now" and "what it was before". I would compare the
> characters and render any differences. This worked quite well for DOS,
> but the instant ANY OS switched to protected mode, they took over the
> interrupt and all console messages stopped. Actually, the same was true
> for int 0x10.
>
If you want DOS application compatibility or Windows compatibility,
then you might need VGA. But you started out talking about
open hardware for Linux - and then you really don't need VGA at all.
Not even an initial 80x25 character array. A kernel without VGA
support (but with some other console, like fbcon) works fine.
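
For example, a kernel configured along these lines (option names as in
the 2.6-era config system) boots with fbcon as its console and no VGA
console at all; the exact fbdev driver would of course be whatever gets
written for the open card:

# CONFIG_VGA_CONSOLE is not set
CONFIG_FB=y
CONFIG_FRAMEBUFFER_CONSOLE=y
# plus the card's own fbdev driver, e.g. a hypothetical CONFIG_FB_OPENCARD=y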
> Even just the DOS shell command-line tends to bypass int 0x10 and write
> directly to display memory.
>
Depends on what version of DOS, but you can always get FreeDOS, for which
source code is available - if DOS matters to you. It is something
I only ever use for flashing BIOS upgrades.
> Furthermore, 640x480x16 simply won't happen at all without direct
> hardware support. Some things rely on that (or mode X or whatever) for
> initial splash screens.
>
Not in Linux. Of course you can reserve the legacy VGA memory region
and just live with the loss of splash screens in DOS.
> In the PC world, too many assumptions are made about the hardware for
> any kind of software emulation to work.
>
Not in the PC world. The PC is only hardware.
The problem is the Microsoft OS world, but supporting that _isn't
necessary_ when you don't plan on high volumes. I guess you
could get Windows going - it uses proper display drivers these days
even if the installer doesn't. Install with a VGA card, swap the driver,
shut down, swap cards, power on, or some such.
> The suggestion that a general-purpose CPU on the graphics card could be
> used to emulate it is correct, but the logic area of the general-purpose
> CPU is greater than that of the dedicated VGA hardware. Furthermore,
> you can't just "stick a Z80 onto the board", because multi-chip
> solutions up the board cost too much.
Thanks for the information; it seems I don't know enough about board
manufacturing.
Helge Hafting
On Fri, Jan 30, 2004 at 12:02:06PM -0500, Timothy Miller wrote:
>
> Helge Hafting wrote:
>
> >>
> >I run X on an unaccelerated framebuffer (1280x1024 16bit color) every day.
>
> This is you. How many other people would be happy with this?
>
Depends on what they do, of course. It _is_ fine for 2D office work;
that surprised me quite a bit. I can even use mplayer and watch
videos on this.
Someone interested in Quake will of course not be satisfied, though.
>
> >
> >So a good 2D card is trivial - a video signal generator and memory on an
> >AGP bus.
>
> I wouldn't call it 'good', but you're essentially correct. Mind you,
> you can get the latest S3 chip for $30 or less. That's well documented,
> a lot faster than a dumb framebuffer, and has VGA built in.
>
> Once you get below a certain level of performance in this open-source
> design, it's no longer worth doing because there are so many low-end
> alternatives that blow it away.
>
Good point. I had the impression you weren't going for the
best performance anyway.
> A BIOS ROM doesn't help you if you don't have a VGA core, and a VGA core
> is not a trivial piece of logic.
>
> I like Macs, Suns, and other UNIX workstations because they don't rely
> on this antiquated piece of logic to act as a console. The chip I
> designed doesn't do VGA, but that doesn't stop it from working nicely as
> a console in a Sun.
>
The PC doesn't need VGA any more than a Sun does. A PC without any VGA
runs the same stuff as the Sun - various Unixes and no legacy DOS stuff.
Don't confuse "PC" with "MS-DOS/Windows". If the Sun's software
selection satisfies you, then you're fine on a PC/x86 without VGA (or other
legacy hardware) too.
[...]
>
> My knowledge of 3D graphics is limited to linear algebra and mostly pure
> theory, but I do have SOME clue as to what existing 3D engines are like.
> The issue of general purpose CPU's versus GPU's has been discussed
> on the web at nauseum, and for what they do, modern GPU's are an order
> of magnitude faster than CPU's at what they do. And they're less expensive!
>
Seems the solution is to get one of those general-purpose GPUs
instead, and build the card around that. Not entirely open hardware,
but _fully documented_, with no hidden surprises.
Helge Hafting
Alright then, how about this: assuming OpenCores has a PCI interface
and a DDR memory controller, I could write a CRT controller. We can put
that into an FPGA and see what happens.
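
For the CRT controller part, the core really is just two counters. Here
is a quick C model of the standard 640x480@60Hz timing (25.175 MHz pixel
clock) - a sketch of the logic, not anyone's actual core; the Verilog
would be the same counters and comparisons.

#include <stdbool.h>
#include <stdio.h>

/* horizontal: visible, front porch, sync, back porch (pixels) */
enum { H_VIS = 640, H_FP = 16, H_SYNC = 96, H_BP = 48,
       H_TOTAL = H_VIS + H_FP + H_SYNC + H_BP };            /* 800 */
/* vertical: visible, front porch, sync, back porch (lines) */
enum { V_VIS = 480, V_FP = 10, V_SYNC = 2, V_BP = 33,
       V_TOTAL = V_VIS + V_FP + V_SYNC + V_BP };            /* 525 */

struct crtc { int h, v; };

/* Advance one pixel clock; report sync and display-enable for that tick. */
static void crtc_tick(struct crtc *c, bool *hsync, bool *vsync, bool *de)
{
    *hsync = (c->h >= H_VIS + H_FP) && (c->h < H_VIS + H_FP + H_SYNC);
    *vsync = (c->v >= V_VIS + V_FP) && (c->v < V_VIS + V_FP + V_SYNC);
    *de    = (c->h < H_VIS) && (c->v < V_VIS);  /* fetch a pixel when true */

    if (++c->h == H_TOTAL) {
        c->h = 0;
        if (++c->v == V_TOTAL)
            c->v = 0;
    }
}

int main(void)
{
    struct crtc c = { 0, 0 };
    bool hs, vs, de;
    long visible = 0;

    for (long i = 0; i < (long)H_TOTAL * V_TOTAL; i++) {  /* one full frame */
        crtc_tick(&c, &hs, &vs, &de);
        visible += de;
    }
    printf("visible pixels per frame: %ld\n", visible);  /* 640*480=307200 */
    return 0;
}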
Helge Hafting wrote:
> On Fri, Jan 30, 2004 at 12:40:38PM -0500, Timothy Miller wrote:
>
>>
>>Maciej W. Rozycki wrote:
>>
>>>On Fri, 30 Jan 2004, Timothy Miller wrote:
>>>
>>>
>>>
>>>>>Another reason to drop VGA then - money.
>>>>
>>>>As soon as PC BIOS's don't require it, we can drop it.
>>>
> PC bioses don't need VGA and never did!
> They use the int 0x10 handler provided by the graphichs card bios.
> When you make the card - you get to write that bios. No problem!
>
> I once used a dec rainbow - a pc with an ibm-incompatible screen.
> The display memory was organized as a linked list of lines instead
> of an array of characters. It came with its own special version
> of msdos 2.11. Few ordinary dos programs would run on it,
> because most tried to access the "standard" 80x25 array instead
> of using msdos for io. Those that did the right thing worked, though.
>
> (An odd machine in other ways too - it had a z80 controlling the
> floppies and an 8088 controlling the screen and hard disk.
> Early 80's asymmetric multiprocessor :-)
>
>
>
>>>No PC BIOS recognizes a VGA. The PC/AT firmware uses int 0x10 to
>>>communicate with the console and as long as there is a handler there,
>>>console output works. Most systems will actually run without a handler,
>>>too, but they'll usually complain to the speaker. The handler is provided
>>>by the ROM firmware of the primary graphics adapter.
>>>
>>>Old PC/AT firmware actually did recognize a few display adapters, namely
>>>the CGA and the MDA, which had no firmware of their own. These days support for
>>>these options is often absent, even though the setup program may provide an
>>>option to select between CGA40/CGA80/MDA/none (the latter being the right
>>>choice for an adapter such as an EGA or a VGA, which provides its own firmware).
>>>
>>
>>You're not entirely correct here. I attempted to write a VGA BIOS for a
>>card which did not have hardware support for 80x25 text.
>>
>>I first tried intercepting int 0x10. I quickly discovered that most DOS
>>programs bypass int 0x10 and write directly to the display memory. As a
>>result, very little of what should have displayed actually did.
>>
>
> Sure, but we're not interested in "most dos programs", are we?
> The pc bios bootup will work; it uses int 0x10.
> lilo output will work.
> linux kernel console output will work.
> X will work, either with the generic framebuffer driver or with
> a proper driver written for the open hardware.
>
>
>>Next, I tried hanging off this timer interrupt. I had two copies of the
>>text display, "now" and "what it was before". I would compare the
>>characters and render any differences. This worked quite well for DOS,
>>but the instant ANY OS switched to protected mode, it took over the
>>interrupt and all console messages stopped. Actually, the same was true
>>for int 0x10.
>>
>
> If you want DOS application compatibility or windows compatibility
> then you might need VGA. But you started out talking about
> open hardware for linux - and then you really don't need vga at all.
> Not even an initial 80x25 character array. A kernel without vga
> support (but some other console like fbcon) works fine.
>
>
>>Even just the DOS shell command-line tends to bypass int 0x10 and write
>>directly to display memory.
>>
>
> Depends on what version of dos, but you can always get freedos for which
> source code is available - if dos matters to you. It is something
> I only ever use for flashing bios upgrades.
>
>
>>Furthermore, 640x480x16 simply won't happen at all without direct
>>hardware support. Some things rely on that (or mode X or whatever) for
>>initial splash screens.
>>
>
> Not in linux. Of course you can reserve the legacy vga memory region
> and just live with the loss of splash screens in dos.
>
>
>>In the PC world, too many assumptions are made about the hardware for
>>any kind of software emulation to work.
>>
>
> Not in the pc world. The pc is only hardware.
> The problem is the microsoft os world, but supporting that _isn't
> necessary_ when you don't plan on high volumes. I guess you
> could get windows going - it uses proper display drivers these days
> even if the installer doesn't. Install with a vga card, swap the driver,
> shut down, swap cards, power on, or some such.
>
>
>>The suggestion that a general-purpose CPU on the graphics card could be
>>used to emulate it is correct, but the logic area of the general-purpose
>>CPU is greater than that of the dedicated VGA hardware. Furthermore,
>>you can't just "stick a Z80 onto the board", because multi-chip
>>solutions up the board cost too much.
>
>
> Thanks for the information, seems I don't know enough about board
> manufacturing.
>
>
> Helge Hafting
>
>
Frank Gevaerts <[email protected]> writes:
> Some people agree:
>
> http://www.opencores.org/projects/manticore/
> http://www.opencores.org/projects/vga_lcd/
Yes, I was just about to mention manticore ;). May I then
mention the GPL CPU called LEON; all the blueprints of the CPU
are released under the GPL. Please look at ESA's page (the European
Space Agency). LEON is an IEEE 1754-conformant CPU, which means
it's compatible with SPARC.
--
b0ef
Quote from Timothy Miller <[email protected]>:
> Alright then, how about this: Assuming opencores has a PCI interface
> and a DDR memory controller, I could write a CRT controller. We can put
> that into an FPGA and see what happens.
Well, there is a PCI to wishbone bridge:
http://www.opencores.org/projects/pci/
and a DDR memory controller:
http://www.opencores.org/projects/ddr_sdr/
but do we really need DDR RAM? For the small amount of RAM on the
card (8 or 16 MB at most), surely the cost of standard static RAM
ICs wouldn't be too off-putting, and it would presumably simplify the
design slightly.
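A quick back-of-the-envelope check (my own arithmetic, assuming a plain
1280x1024 16 bpp desktop, so treat the numbers as rough):

#include <stdio.h>

int main(void)
{
        const double w = 1280, h = 1024, bytespp = 2, refresh_hz = 60;
        double frame_bytes = w * h * bytespp;

        printf("one frame : %.1f MB\n", frame_bytes / 1e6);
        printf("scanout   : %.0f MB/s\n", frame_bytes * refresh_hz / 1e6);
        return 0;       /* ~2.6 MB per frame, ~157 MB/s to refresh it */
}

So a couple of frames plus fonts fit comfortably in 8 MB, and the
scanout rate looks well within what an ordinary single-data-rate memory
interface can deliver, which is why DDR seems like overkill to me for a
2D-only card.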
(It's you that's got to build it, though, so it's your call :-) ).
John.
Hi,
Just one more crazy idea ...
Have you thought about cooperating with some kind of commercial company?
E.g. SiS tried/is trying to create a 3D GPU without much success ... but
they might be able to help manufacture GPUs/cards, even in bigger
series, and as long as they don't have to invest too much in development
and get some income from it, I think they could do it. And as far as I know,
a lot of Windows users would appreciate open-source graphics cores because
of the advantages of open-source drivers. I would also say that there would
be enough people interested in writing drivers for Windows, so your card
could be quite marketable in general. I know this is not a Windows
mailing list, but speaking of the marketability of the core, Windows has the
greatest market share, and having the card working in Windows too would mean
production volumes big enough to reach low prices. So why not "abuse"
Windows to get us a good GPU?
Tomas Zvala
>>>>> "Timothy" == Timothy Miller <[email protected]> writes:
Timothy> Alright then, how about this: Assuming opencores has a
Timothy> PCI interface and a DDR memory controller, I could write
Timothy> a CRT controller. We can put that into an FPGA and see
Timothy> what happens.
I suggest the following: spend the next few months designing,
writing documents, and starting on the RTL. In that time, PCI Express
(PCI over high-speed serial) motherboards and fairly cheap
next-generation Xilinx Virtex FPGAs with integrated SERDES and a free
PCI Express core from Xilinx should be available.
PCI Express 8X gives you 16 Gb/sec of bandwidth in each direction (32
Gb/sec total), which should be enough to make UMA (ie no memory
attached to the FPGA) palatable. So your proto board is looking like
it has just power supplies, the FPGA, and misc. video junk (ie a DAC or
digital flat panel support), which means it should be reasonably cheap to
design and fab. If you really want to, you could look at putting DRAM
(RLDRAM?) down on the board but I don't think it's worth the cost and
complexity.
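A rough sanity check on the UMA idea (my own arithmetic, assuming a
worst-case 2048x1536 32 bpp desktop at 75 Hz):

#include <stdio.h>

int main(void)
{
        const double w = 2048, h = 1536, bytespp = 4, refresh_hz = 75;
        double scanout_gbps = w * h * bytespp * refresh_hz * 8 / 1e9;

        printf("scanout alone: %.1f Gb/s of the 16 Gb/s link\n", scanout_gbps);
        return 0;       /* ~7.5 Gb/s, roughly half the downstream link */
}

Even that huge a desktop eats only about half the downstream link for
scanout, so there should be headroom left for command and rendering
traffic.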
(By the way, the Virtex FPGAs also have embedded PowerPC 405 cores
that can run at ~400 MHz, which means a lot of stuff -- exception
paths, etc. -- can be done in firmware if you want.)
It's still in the thousands of dollars for this proto stage, but the
Virtex should be fast enough to do something pretty interesting. At
that point either the project takes off and you can look at doing a
full-custom chip (I don't think it's worth doing any "rapid chip" or other
low-NRE design, since your unit costs will be too high for a
mass-volume graphics chip), or the project dies.
- Roland