Re: [AD] Allegro 5's new graphics core
Shawn Hargreaves wrote:
[snip]
> Actually I think that mapping Allegro graphics calls onto GL calls
> would be just as much a waste of time as reimplementing GL itself
> inside Allegro. It just doesn't fit well, even if you make the screen
> a radically different thing to bitmaps (which would be a great shame
> for people doing regular 2D work, as it would remove a huge amount
> of flexibility and capability).
You make very good points. I'd like to keep "everything is a bitmap" too.
However, I don't see why a mapping to GL (for a single driver, mind you,
unless someone writes a D3D driver) would be such a waste of time. Yes, I
know, operations on the screen
will be pathetically slow. However, let's consider what Allegro games
currently do when it comes to the screen (and video bitmaps). I'll assume
double buffering isn't being used, because if it is, GL won't bring any
advantage (speed-wise or otherwise). Games load their data from datafiles,
copy some of their bitmaps to video memory, then use accelerated blits to
draw on the screen (or on the second page). Bitmaps in video memory are only
very rarely modified, and they are almost never read back. The screen is
sometimes read back to do some neat graphic effects (distortions?), but I
have yet to see this go beyond simple experimentation.
My point is that Allegro games today (the ones which don't double buffer)
already fit the paradigm of "put textures in video RAM, then never touch
them". So GL can only make this better, by allowing blending of video
bitmaps.
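For the sake of argument, here's a minimal sketch of that pattern in current
(Allegro 4) terms. The bitmap names, file name and mode are placeholders, and
error checking is left out:

#include <allegro.h>

int main(void)
{
    BITMAP *mem_sprite, *vid_sprite;

    allegro_init();
    install_keyboard();
    set_color_depth(16);
    set_gfx_mode(GFX_AUTODETECT, 640, 480, 0, 0);

    /* Load artwork into system memory (e.g. from a datafile). */
    mem_sprite = load_bitmap("sprite.bmp", NULL);

    /* Copy it into video memory once. From here on it is only read by
       the blitter, never modified or read back by the CPU.
       (create_video_bitmap() can return NULL if the driver has no
       spare video memory.) */
    vid_sprite = create_video_bitmap(mem_sprite->w, mem_sprite->h);
    blit(mem_sprite, vid_sprite, 0, 0, 0, 0, mem_sprite->w, mem_sprite->h);

    while (!keypressed()) {
        /* VRAM -> VRAM blit: the accelerated path on most drivers. */
        blit(vid_sprite, screen, 0, 0, 0, 0, vid_sprite->w, vid_sprite->h);
    }

    destroy_bitmap(vid_sprite);
    destroy_bitmap(mem_sprite);
    return 0;
}
END_OF_MAIN()

Under a GL driver, the "copy to video memory" step would simply become a
texture upload, and the VRAM -> VRAM blit a textured quad.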
[snip]
> In 3D, even if the screen was different to regular bitmaps, it would
> still be impossible to provide an efficient version of something
> like al_draw_bitmap_to_screen(bmp, x, y), because functions like
> glDrawPixels() or glTexImage2D() are not the optimised path in most
> hardware/drivers.
I do agree. I was planning something along those lines for AllegroGL: the
idea is that the first time you blit something from memory to the screen, it
gets converted to a texture, the texture id is assigned back to the bitmap,
and a dirty flag is cleared. Whenever something is drawn into the bitmap, the
dirty flag is set, so the texture gets re-uploaded the next time it's used.
If the dirty flag wasn't set, the texture already in VRAM gets used.
This can be done trivially from within Allegro, but convincing other add-on
writers to update their code is a whole other issue.
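To make the idea concrete, here's a rough sketch of that caching scheme. This
is not AllegroGL's actual code; the structure and helper name are made up,
and pixel-format conversion, texture sizes and filtering are all glossed
over:

#include <allegro.h>
#include <GL/gl.h>

typedef struct CACHED_BITMAP {
    BITMAP *bmp;      /* the memory bitmap being shadowed        */
    GLuint  texture;  /* its copy in video memory (0 = none yet) */
    int     dirty;    /* set whenever the bitmap is drawn into   */
} CACHED_BITMAP;

/* Return a texture for the bitmap, uploading the pixels only if no
   texture exists yet or the bitmap changed since the last upload. */
static GLuint get_texture(CACHED_BITMAP *c)
{
    if (c->texture == 0) {
        glGenTextures(1, &c->texture);
        c->dirty = 1;
    }
    glBindTexture(GL_TEXTURE_2D, c->texture);

    if (c->dirty) {
        /* Assumes a 24-bit memory bitmap with contiguous rows and
           power-of-two dimensions; real code would convert formats. */
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, c->bmp->w, c->bmp->h,
                     0, GL_RGB, GL_UNSIGNED_BYTE, c->bmp->line[0]);
        c->dirty = 0;
    }
    return c->texture;
}

Every Allegro drawing call that targets the bitmap would then just set
c->dirty = 1, and the next screen blit re-uploads it automatically.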
> On a lot of cards it will be impossible to even
> just draw everything to a memory surface and then blit that result
> across to a 3D display at anything approaching a decent framerate.
Well, if you double buffer, there's no point in using OpenGL really.
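Just to spell out the pattern we're talking about (a sketch; "game_over" and
the drawing calls are placeholders):

    BITMAP *buffer = create_bitmap(SCREEN_W, SCREEN_H);

    while (!game_over) {
        clear_bitmap(buffer);
        /* ... draw the whole frame into the memory buffer ... */
        blit(buffer, screen, 0, 0, 0, 0, SCREEN_W, SCREEN_H);
    }

On a GL display that final blit means re-uploading the entire frame every
time, which is exactly the slow path described above, so you may as well
stick with a plain 2D driver.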
[snip]
One final note: I will not even try to change Allegro so that it better fits
GL. What I'd (optimally) like is for Allegro to work with 2D consoles, or
arcade machines of some sort, where sprite drawing is accelerated. I'd also
like Allegro to support multiple displays and/or windows. And I'd like
Allegro to provide a framework so that games don't need 16 separate render
paths just to draw in whatever way happens to be fastest on a given driver.
This is what motivated me to propose the new system. It's just a side effect
that it seems to work well with the way AllegroGL currently does things.
--
- Robert J Ohannessian
"Microsoft code is probably O(n^20)" (my CS prof)
http://pages.infinit.net/voidstar/