[AD] D3D and OGL use different viewports



(Note: these issues are reasonably important bugs, but I wouldn't block 5.1.8 on them, as they've been there for years.)

While fixing the half-pixel shift in the D3D code, I noticed that the D3D and OGL backends use different viewport sizes. OGL uses the size of the parent bitmap, while D3D uses the size of the backing texture. If the bitmap has non-power-of-two dimensions and the system does not support them, the two approaches give different sizes. The projection matrix transforms user coordinates into viewport coordinates, so having different viewports means that the same projection matrix will produce different outputs. Additionally, tying the viewport to the backing texture means one needs to be very careful when using projection matrices with D3D.
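To make the discrepancy concrete, here is a minimal sketch (not actual Allegro code; the helper names are illustrative) of how the same orthographic projection lands on different pixels depending on whether the viewport is the bitmap size or the power-of-two texture size:

```c
#include <assert.h>

/* Illustrative helper: round up to the next power of two, as a backend
 * might do when NPOT textures are unsupported. */
static int next_pow2(int x)
{
    int p = 1;
    while (p < x)
        p <<= 1;
    return p;
}

/* An orthographic projection built for width proj_w maps user x in
 * [0, proj_w] to NDC [-1, 1]; the viewport transform then maps NDC
 * back to pixels [0, viewport_w]. If viewport_w != proj_w, the
 * result is scaled. */
static double mapped_pixel(double x, double proj_w, double viewport_w)
{
    double ndc = 2.0 * x / proj_w - 1.0;   /* projection */
    return (ndc + 1.0) * 0.5 * viewport_w; /* viewport transform */
}
```

For a 100-pixel-wide bitmap on a system without NPOT support, the backing texture is 128 wide, so a projection meant to fill the bitmap ends up stretched by 128/100 when the viewport is the texture size.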

I think the OGL method is better, as it hides the fact that there is a backing texture. I want to switch D3D to the same method, but I want to double-check that D3D wasn't coded that way on purpose (e.g. maybe it's some device-loss issue, as always?).

Additionally... viewports provide a nice way of implementing sub-bitmap functionality, instead of the transformation hacks we do today. Neither backend uses viewports for this purpose. Was there a reason we implemented sub-bitmaps the way we did instead of using viewports? One benefit of switching to viewports is that it would enable setting the projection matrix while drawing to a sub-bitmap. Right now that just plain doesn't work, and fixing it would touch about as much code as redoing it with viewports.
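As a sketch of what the viewport approach could look like (hypothetical helper, not existing Allegro code): instead of folding the sub-bitmap offset into the transformation, the backend would compute a viewport rectangle for the sub-region. For GL this needs a vertical flip, since glViewport takes a bottom-left origin while Allegro bitmaps use a top-left one:

```c
#include <assert.h>

typedef struct {
    int x, y, w, h;
} Viewport;

/* Hypothetical sketch: viewport rectangle for a sub-bitmap at
 * (sx, sy) with size (sw, sh) inside a parent bitmap of height
 * parent_h. The y flip converts Allegro's top-left origin to
 * GL's bottom-left origin. */
static Viewport subbitmap_viewport(int sx, int sy, int sw, int sh,
                                   int parent_h)
{
    Viewport v = { sx, parent_h - (sy + sh), sw, sh };
    return v;
}
```

With this, a projection matrix set while targeting the sub-bitmap would map cleanly onto the sub-region, with no knowledge of the parent's size or the current transform hacks.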

-SL



