RE: [AD] No more video/system bitmaps



> Bob, could you explain how ARB_vertex_buffer_object works, as an
> example?


ARB_vertex_buffer_object (VBO) is an API for creating and managing
objects that contain per-vertex data (positions, colors, texture
coordinates, etc.).

When an application creates a VBO, it also specifies a usage hint. The
driver uses this hint to pick the memory region in which the vertex
data is initially allocated.

For example, if you use STATIC_DRAW (define the data once, draw from it
many times), the driver will typically allocate the memory on-board the
GPU; that way, drawing from that VBO is fastest, but updating it (or
worse, reading it back) is slow. Since the app told the driver that it
will likely never update or read back the VBO, this happens to be the
best memory region to allocate it in.

Different usage hints may place the VBO in different memory regions: AGP
uncached, system memory, etc.


Now, it sometimes happens that applications deviate from the usage
pattern they hinted at. For example, they read back from a VBO that was
created with STATIC_DRAW (i.e. one living in video memory).

The driver can use heuristics (as it sees fit) to move the VBOs around
(or even make them live in different memory regions at the same time) to
optimize performance.

For example, if the VBO is drawn from a lot and also read back a lot,
but rarely (if ever) actually updated, then the driver may want to keep
two copies of it: one in system memory for the app to read, and one in
video memory for the GPU to read.

The driver picks the memory region to allocate based on available
memory, usage hints, heuristics, or anything else it might care about.

The application doesn't really need to worry about where VBOs should
live, or whether they need to be copied around; it just uses them. The
magic all happens behind the scenes.


Now, back to Allegro. We would like to do something similar. The
optimal memory allocation for bitmaps under DDraw is very different
from the one under OpenGL (for example). DDraw would like surfaces that
are blended to live in system memory (since blending is done in
software), but OpenGL would like them in video memory (as textures).

So apps currently need to worry about which driver is used, and how the
bitmap should be allocated.

Worse yet, a single allocation may not be the best one for that bitmap
all the time. For example, you have a scene with unblended sprites;
then, all of a sudden, you need to blend stuff on the screen for a few
seconds of animation.

You don't want apps to have to switch the rendering method from page
flipping to double-buffering by hand (if that's even the right one to
use!).

You really want Allegro to take care of the little details.


Hope this helps.




