Re: [AD] shader deferred sets
On Fri, 01 Mar 2013 15:03:42 -0700, Jon Rafkind <workmin@xxxxxxxxxx> wrote:
> On 03/01/2013 02:52 PM, Peter Wang wrote:
> > On Fri, 01 Mar 2013 12:11:16 -0700, Jon Rafkind <workmin@xxxxxxxxxx> wrote:
> >> The answer here, as suggested by Trent, is to make the deferred shader copy the float vector. I will make this change if no one beats me to it.
> > The same problem applies to al_set_shader_matrix.
> >
> > Since there are a limited number of elements per vector, it would
> > be faster to use fixed-size arrays in the GLSL_DEFERRED_SET structure
> > (use a union to reduce the size) instead of allocating per set.
>
> I think the number of elements can be unbounded, because you can set arrays of vectors:
>
> vec4 x;
> vec4 x[2];
> vec4 x[100];
Oops, I read it wrong.
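So copying per set it is, as Jon suggested.  I imagine the fix looks
roughly like this (a sketch only; the helper and its parameters are my
invention, not the actual GLSL_DEFERRED_SET code):

   #include <string.h>             /* memcpy */
   #include <allegro5/allegro.h>   /* al_malloc, al_free */

   /* Copy the caller's float vector so the data stays valid until the
    * deferred set is actually applied.  num_components is the vector
    * width (4 for vec4), num_elems the array length. */
   static float *copy_float_vector(const float *fv, int num_components,
      int num_elems)
   {
      size_t n = (size_t)num_components * num_elems;
      float *copy = al_malloc(n * sizeof(float));
      memcpy(copy, fv, n * sizeof(float));
      return copy;   /* al_free() this after the set is applied */
   }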
> > But I'd also like to know why there are deferred sets.
>
> I understand their usage now. You cannot set a uniform on a shader
> program until you call glUseProgram on it first; al_use_shader does
> this. Without deferred sets you would have to switch to the shader you
> are about to set some uniforms on, then switch back to the old shader.
> Deferred sets let you make changes to a shader without changing the
> current state.
Right, I confused the existing API with planned changes.
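So without deferred sets, every al_set_shader_* call would need to do
the save/restore dance itself.  A sketch of that dance (my names; 'loc'
is assumed to come from glGetUniformLocation() beforehand):

   #include <allegro5/allegro.h>
   #include <allegro5/allegro_opengl.h>   /* GL 2.0 entry points */

   /* What each setter would have to do without deferred sets. */
   static void set_uniform_vec4_array(GLuint prog, GLint loc,
      GLsizei num_elems, const GLfloat *fv)
   {
      GLint prev;
      glGetIntegerv(GL_CURRENT_PROGRAM, &prev); /* save current program */
      glUseProgram(prog);                       /* switch to the target */
      glUniform4fv(loc, num_elems, fv);         /* set the uniform */
      glUseProgram((GLuint)prev);               /* restore the old one */
   }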
In the API that I proposed, I guess I assumed that al_prepare_set_shader
would (or at least could) call glUseProgram anyway. Paul suggested that
whatever al_prepare_set_shader does could be done implicitly.
So, all in all: would al_set_shader_* implicitly changing the current
program object cause trouble for anyone?
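By "implicitly changing" I mean each setter doing something like the
sketch below (illustrative only: the signature and the program_object
field are my shorthand, not the in-tree code):

   /* Sketch: the setter makes 'shader' the current program object and
    * deliberately does not restore the previous one. */
   bool al_set_shader_float(ALLEGRO_SHADER *shader, const char *name,
      float f)
   {
      GLint loc;
      glUseProgram(shader->program_object);  /* implicit program change */
      loc = glGetUniformLocation(shader->program_object, name);
      if (loc < 0)
         return false;
      glUniform1f(loc, f);
      return true;   /* the previously current program stays changed */
   }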
Peter