Re: Using the EQAudioUnit with OpenAL on iPhone
- Subject: Re: Using the EQAudioUnit with OpenAL on iPhone
- From: Steven Winston <email@hidden>
- Date: Mon, 24 Aug 2009 16:39:36 -0700
I was suggesting that, since OpenAL hands you the PCM data, you should
be able to do most EQ processing on that PCM data directly. I don't
think OpenAL includes any EQ library; however, writing one that covers
most uses is not impossible.
On Mon, Aug 24, 2009 at 1:37 PM, Darren Baptiste <email@hidden> wrote:
> @Steve - thanks for the clarification. I guess I had the wrong mental
> picture of the process.
> I can wrap my head around the idea of processing the contents of the buffer
> BEFORE calling the play method.
>
> However, your suggestion that doing EQ work inside OpenAL instead throws me,
> because I don't see ANY EQ-type methods in the OpenAL framework provided in
> the iPhone SDK.
>
> Is there another way of exposing these, or is there another library I must
> link to?
>
> Cheers,
> Darren
>
>
> "Be the change you wish to see in the world" ~ Mahatma Gandhi
>
> d{ ~_~ }b
>
> On 2009-08-24, at 2:34 PM, Steven Winston <email@hidden> wrote:
>
>> Hi Darren,
>>
>> My understanding is that you might be thinking about this the wrong
>> way. When the docs say that OpenAL uses RemoteIO, that doesn't mean
>> audio is passed in on an "audio-in" line and emitted on an
>> "audio-out" line; it's not a pass-through model. RemoteIO has a
>> render callback, and the audio buffer RemoteIO uses is created and
>> filled elsewhere. Think of the audio world's hello world, a sine
>> wave: you create the sound inside the render callback; it isn't
>> piped into a render pipeline.
>> Another way to think about it: OpenAL, on any platform, is a
>> high-level layer for audio processing. In this specific case,
>> positioning is done by adjusting balance and fade (there are other
>> ways to do it). Seen that way, what you want access to is the
>> post-OpenAL-processed buffers: skip alSourcePlay() and do all the
>> processing you want before that call. Then write your own play
>> function that fills a buffer with the data you would have played,
>> and feed that buffer to the AUiPodEQ. (Honestly, though, I'm fairly
>> certain that's not what you want, since it would be far easier to
>> do whatever EQ work you need inside OpenAL and forgo AUiPodEQ.)
>> Anyway, that'd be my first idea for how to solve this problem. Good luck!
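The "write your own play function" idea above might be sketched like this. Everything here is hypothetical (`my_source_play`, `pcm_filter`, and `halve_filter` are invented names), and the OpenAL calls are left as comments so the sketch stands alone without the AL headers:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical filter hook: any EQ you like, applied in place. */
typedef void (*pcm_filter)(int16_t *pcm, size_t frames, void *state);

/* Example filter: a crude -6 dB gain (halve every sample). */
void halve_filter(int16_t *pcm, size_t frames, void *state)
{
    (void)state;
    for (size_t i = 0; i < frames; i++)
        pcm[i] = (int16_t)(pcm[i] / 2);
}

/* Replacement for calling alSourcePlay() directly: run the EQ on
 * the PCM first, then hand the processed buffer to OpenAL as usual. */
void my_source_play(int16_t *pcm, size_t frames,
                    pcm_filter eq, void *state)
{
    eq(pcm, frames, state);
    /* With the real OpenAL headers this would continue:
     *   alBufferData(buf, AL_FORMAT_MONO16, pcm,
     *                (ALsizei)(frames * sizeof(int16_t)), 44100);
     *   alSourcei(src, AL_BUFFER, (ALint)buf);
     *   alSourcePlay(src);
     */
}
```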
>>
>> Steve
>>
>> On Mon, Aug 24, 2009 at 6:17 AM, Darren Baptiste <email@hidden> wrote:
>>>
>>> I am using OpenAL to play sounds because of the framework's
>>> positioning features. It works well. Is there a way to route the
>>> output from OpenAL through an AudioUnit (e.g. the AUiPodEQ) before
>>> it reaches the hardware?
>>>
>>> Or perhaps it has to go the other way around: first through the
>>> AudioUnit(s), then through OpenAL?
>>>
>>> I read somewhere that OpenAL outputs via the RemoteIO AudioUnit. Is
>>> there a way to redirect it through a custom AUGraph instead?
>>>
>>> Darren
>>>
>
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden