On 2008-04-28, Ryan Dickie <
goalieca@xxxxxxxxxx> wrote:
>
> Hmm, sorry. I guess I got a bit carried away trying to refactor in order to
> make things more manageable.
>
> I was on #allegro-dev and I found out the 'mixer' was meant to be more of a
> 'filter' system where people could do fancy audio effects processing. I
> could add something like what i describe below.
Actually, there is a much simpler purpose: we need to be able to support sound
cards/drivers which don't support multiple voices and we need to be able to
work with samples/streams that have differing sampling rates or formats.
AFAICS all that code is gone and I guess you rely on OpenAL to handle that.
But OpenAL won't be the only backend.
Yes, I rely on OpenAL for this. From what I can gather, ALSA, PulseAudio, CoreAudio, DirectSound, and the Vista/Xbox API all handle this as well.
If we do need it, then the correct place for the mixer code would be in the driver, not in the user-facing framework. Mixing should happen transparently, without requiring the user to manage it.
> Each voice has an attached sample. Each voice could also have a set of
> attached 'filters'. We could do something like this:
>
> //generate kernels based on sample properties like depth, frequency, etc.
> //a small set of stock kernels, users will obviously have to make their own
> to do custom effects
> void* kernel1 = al_filter_make_kernel(AL_SAMPLE* spl,
> ALLEGRO_GAUSSIAN_KERNEL);
> ...
> //adding a filter simply convolves the kernels always yielding a single
> kernel
> //this single kernel will be convolved with the sound sample when it is time
> // f * (g * h) = ( f * g ) * h
> // scalar a for amplification/attenuation can be factored in as well.
> al_voice_add_filter( void* kernel1, int len );
> al_voice_add_filter( void* kernel2, int len );
> al_voice_clear_filters();
> ...
You'll have to explain this idea more.
Peter
Sorry if some of this is review; I don't know whether everyone is familiar with DSP, so I'll start with a rough overview.
Any audio stream can be described as a function of time: y = x(t), where y is the output of the sound card. You can think of y as a vector (each component is a channel) and t as the sample position.