I'm developing an audio analyser application that sends audio to an external device and
evaluates the returned input in real time. I'm new to core audio and would appreciate advice
as to how best to approach simultaneous live input and output. To complicate things, I'm
developing on new Intel hardware that doesn't allow a single AUHAL to handle I/O, and the
application needs to know the latency in samples of the input stream relative to the
output stream, so the input analysis algorithm can sync to the output test signal
generator.
So far I think I'm looking at 3 approaches:
1. I've already set up 2 AUHALs to handle I/O, but testing their inTimeStamp.mSampleTime
doesn't yield sample points that refer to a global sample clock. I presume that each AUHAL
generates its own mSampleTime reference, so should I look to using inTimeStamp.mHostTime to
sync the streams?
You can set kAudioOutputUnitProperty_StartTimestampsAtZero to false to tell AUHAL to give you the raw time stamps from the HAL. This gives you access to the HAL's time conversion routines, which you will need because these two devices have different sample clocks but share the same host time clock.
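Setting the property is a one-line AudioUnitSetProperty call. A minimal sketch, assuming `auhal` is an already-opened AUHAL instance:

```c
#include <AudioUnit/AudioUnit.h>

// Ask AUHAL to pass through the HAL's raw time stamps instead of
// renumbering mSampleTime from zero when the unit starts.
static OSStatus UseRawHALTimeStamps(AudioUnit auhal)
{
    UInt32 startAtZero = 0;  // false: keep the HAL's own time base
    return AudioUnitSetProperty(auhal,
                                kAudioOutputUnitProperty_StartTimestampsAtZero,
                                kAudioUnitScope_Global,
                                0,
                                &startAtZero,
                                sizeof(startAtZero));
}
```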
Thus, to convert a sample number from device A into a sample number on device B, you do this:
1) Call AudioDeviceTranslateTime on device A to convert the sample time in A's sample clock to a host time.
2) Take that host time and translate it back into a sample number in B's time base by calling AudioDeviceTranslateTime on B.
Note that if you are using a time stamp that already has a host time, like the time stamp passed to your AUHAL render proc, you can skip step 1 and go straight to step 2.
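The two steps above can be sketched as a single helper. This assumes `deviceA` and `deviceB` are the AudioDeviceIDs behind your two AUHALs; the mFlags on the output time stamp tell AudioDeviceTranslateTime which representation you want back:

```c
#include <CoreAudio/CoreAudio.h>

// Convert a sample time on device A into the equivalent sample time on
// device B, via the shared host time clock.
static OSStatus TranslateSampleTime(AudioDeviceID deviceA,
                                    AudioDeviceID deviceB,
                                    Float64 sampleTimeOnA,
                                    Float64 *outSampleTimeOnB)
{
    // Step 1: sample time in A's clock -> host time.
    AudioTimeStamp inTime = { 0 };
    inTime.mSampleTime = sampleTimeOnA;
    inTime.mFlags = kAudioTimeStampSampleTimeValid;

    AudioTimeStamp hostTime = { 0 };
    hostTime.mFlags = kAudioTimeStampHostTimeValid;  // request a host time

    OSStatus err = AudioDeviceTranslateTime(deviceA, &inTime, &hostTime);
    if (err != noErr) return err;

    // Step 2: host time -> sample time in B's clock.
    AudioTimeStamp outTime = { 0 };
    outTime.mFlags = kAudioTimeStampSampleTimeValid;  // request a sample time

    err = AudioDeviceTranslateTime(deviceB, &hostTime, &outTime);
    if (err != noErr) return err;

    *outSampleTimeOnB = outTime.mSampleTime;
    return noErr;
}
```

If your starting time stamp already has a valid host time (the render proc case above), you would populate `hostTime` from it directly and make only the second call.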
2. Should I be looking into programmatically creating an aggregate device, which I assume
would return the same sample time reference for each stream?
An aggregate device is a great solution for what you are doing. It will provide your input data and output data in the same IOProc, which minimizes the overhead, latency, and hassle for you. Plus, everybody will be in the same sample time base, so you won't have to do any time translations.
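Programmatic creation goes through AudioHardwareCreateAggregateDevice, which takes a description dictionary. A sketch, assuming you have already fetched the two devices' UID strings (e.g. via kAudioDevicePropertyDeviceUID); the aggregate's own name and UID strings here are made-up values for illustration:

```c
#include <CoreAudio/CoreAudio.h>
#include <CoreFoundation/CoreFoundation.h>

// Build a private aggregate out of an input device and an output device.
static OSStatus CreateAggregate(CFStringRef inputDeviceUID,
                                CFStringRef outputDeviceUID,
                                AudioObjectID *outAggregateID)
{
    // Sub-device list: one dictionary per device, keyed by its UID.
    const void *subKey = CFSTR(kAudioSubDeviceUIDKey);
    CFDictionaryRef inputDict = CFDictionaryCreate(NULL,
        &subKey, (const void **)&inputDeviceUID, 1,
        &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
    CFDictionaryRef outputDict = CFDictionaryCreate(NULL,
        &subKey, (const void **)&outputDeviceUID, 1,
        &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);

    const void *listVals[2] = { inputDict, outputDict };
    CFArrayRef subDevices = CFArrayCreate(NULL, listVals, 2,
                                          &kCFTypeArrayCallBacks);

    int isPrivate = 1;  // hide the aggregate from other applications
    CFNumberRef privateNum = CFNumberCreate(NULL, kCFNumberIntType, &isPrivate);

    const void *descKeys[4] = {
        CFSTR(kAudioAggregateDeviceNameKey),
        CFSTR(kAudioAggregateDeviceUIDKey),
        CFSTR(kAudioAggregateDeviceSubDeviceListKey),
        CFSTR(kAudioAggregateDeviceIsPrivateKey)
    };
    const void *descVals[4] = {
        CFSTR("Analyser I/O"),                    // hypothetical display name
        CFSTR("com.example.analyser.aggregate"),  // hypothetical unique UID
        subDevices,
        privateNum
    };
    CFDictionaryRef description = CFDictionaryCreate(NULL, descKeys, descVals, 4,
        &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);

    OSStatus err = AudioHardwareCreateAggregateDevice(description, outAggregateID);

    CFRelease(description);
    CFRelease(privateNum);
    CFRelease(subDevices);
    CFRelease(outputDict);
    CFRelease(inputDict);
    return err;
}
```

Marking the aggregate private keeps it out of other applications' device lists; remember to tear it down with AudioHardwareDestroyAggregateDevice when you are done.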
3. Or, since I don't need to use any audio units/converters, would it be more
straightforward to add IOProcs directly to the appropriate audio devices?
If you did this, you'd have the same problems you have with option 1, plus the additional work of being a proper HAL client, which using AUHAL saves you from having to do.
Note that you should keep using AUHAL even if you go the aggregate device route.
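Pointing an existing AUHAL at the aggregate is just a device-property change, so the rest of your AUHAL code stays unchanged. A minimal sketch, assuming `aggregateID` came back from AudioHardwareCreateAggregateDevice:

```c
#include <AudioUnit/AudioUnit.h>
#include <CoreAudio/CoreAudio.h>

// Make the AUHAL do its I/O on the aggregate device.
static OSStatus UseAggregateWithAUHAL(AudioUnit auhal, AudioDeviceID aggregateID)
{
    return AudioUnitSetProperty(auhal,
                                kAudioOutputUnitProperty_CurrentDevice,
                                kAudioUnitScope_Global,
                                0,
                                &aggregateID,
                                sizeof(aggregateID));
}
```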