[AD] Thoughts on speeding up X
This is pretty much just a random musing I had earlier tonight. The
reason Allegro's X driver is so slow is fairly obvious after looking
through the source and realizing what it's doing. It's using a double
buffer for the screen bitmap, and doing the equivalent of a blit to the X
display whenever a piece of it changes. Granted, it only does this once
per operation (putpixel, blit, hline, etc.), but I think the full
consequence of this is much more serious.
In X11, you start off with a double buffer just by using the driver as
noted above. All gfx operations are done to this memory bitmap (an
XImage). Now, when you add a color conversion into the mix (by selecting
a different screen depth than the current X display), it's using two
buffers (the XImage at X's real depth, and one at Allegro's screen
depth). So we're double buffering a double buffer. On top of this, games
usually employ a double buffer themselves, for 3 buffers. And we're not
done yet: X itself can also be double buffering us. So that gives us a
potential of 4 separate buffers a single gfx operation has to go through
before not only displaying, but getting control back for the program to
continue.
My initial thought after realizing this is: why the initial double
buffer? AFAIK, X has drawing primitives we can map getpixel, putpixel,
line, and the like to. It also has an image structure we could
potentially hack and use to blit Allegro bitmaps directly to the X
display. This should significantly speed up matching-color-depth
operations. We can probably keep the double buffer for when color
depths don't match (after all, the docs do state that using a different
depth can impair performance, although this also applies to fullscreen
X11 too, not just windowed), but include a config option to disallow
this, like Windows does.
Thoughts? Ideas?
- Kitty Cat