Hi Brian,
I misread the description of the GetLatency function: it returns a number of seconds, not frames, and I didn’t pay attention to the return type. I thought it returned an integral value that therefore had some frame-count meaning to the graph. Thank you for opening my eyes.
In other words, you must decouple the actual AU input and output from your algorithm, so that the surrounding queues deal with whatever number of samples the AU host dictates, while your inner stage always processes 2*N frames: it consumes N frames of input and adds N frames to the output. You're basically working with a sliding window over the audio samples - and you must maintain this sliding window yourself, because the AU graph host will not do it for you.
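The bookkeeping described above can be sketched in plain C++, independent of the AU SDK (the class and method names here are made up for illustration, not real API): input arrives in whatever chunk sizes the host chooses, and whenever 2*N frames have accumulated, the window slides forward by N and N output frames become available.

```cpp
#include <cstddef>
#include <deque>
#include <vector>

// Illustrative sliding-window stage: needs 2*N buffered frames before it can
// consume N of them and emit N output frames.
class SlidingWindow {
public:
    explicit SlidingWindow(std::size_t n) : N(n) {}

    // Push an arbitrary-sized input chunk; return whatever output is ready.
    std::vector<float> push(const std::vector<float>& chunk) {
        input.insert(input.end(), chunk.begin(), chunk.end());
        std::vector<float> out;
        while (input.size() >= 2 * N) {
            // A real algorithm would look at all 2*N frames here; this sketch
            // just copies the first N (the frames that leave the window).
            out.insert(out.end(), input.begin(), input.begin() + N);
            input.erase(input.begin(), input.begin() + N);  // slide by N
        }
        return out;
    }

private:
    std::size_t N;
    std::deque<float> input;  // frames waiting to enter the window
};
```

Note that no output at all is produced until 2*N input frames have arrived, which is why the latency question below matters.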
The good news for me is that I have already implemented independent sliding windows for both input and output, so I just need to change the input window from the pull model to the push model. I was actually going to do something like this, but without the GetLatency function I think I would have failed; thanks to you I now know that this is the right way and not an ugly workaround :)
p.s. Since your algorithm requires 2*N frames to process, I believe your minimum latency is also 2*N frames. I considered whether you could get by with only N frames of latency, but given that the AU host could ask you to Render as little as 1 sample, I don't think your algorithm could work unless it is guaranteed the full 2*N frames of buffering.
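Since AUBase::GetLatency() reports the latency in seconds (a Float64), the 2*N-frame figure has to be divided by the current sample rate before being returned; a trivial sketch of that conversion (the function name is illustrative):

```cpp
#include <cstdint>

// Convert a latency expressed in frames into the seconds value that an
// audio unit is expected to report. For a lookahead algorithm as discussed
// above, latencyFrames would be 2*N.
double latencySeconds(std::uint32_t latencyFrames, double sampleRate) {
    return static_cast<double>(latencyFrames) / sampleRate;
}
```

For example, with N = 512 at 44100 Hz, latencySeconds(2 * 512, 44100.0) is roughly 0.0232 seconds.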
Correct.
Once again thank you for your detailed answer.
— Thanks, Roman
Hi all, I’m developing an Audio Unit of the ‘aufx’ type that needs to read some data ahead: to output N frames I need 2*N frames of input.
I have a class derived from AUBase that overrides AUBase::Render:

    OSStatus DeclipUnit::Render(AudioUnitRenderActionFlags &ioActionFlags,
                                const AudioTimeStamp &inTimeStamp,
                                UInt32 nFrames)
and have the following snippet of code in it. The code is simplified — in reality I need to buffer several packets before applying the effect; here I just demonstrate the problem:
    AudioTimeStamp newTimeStamp = inTimeStamp;
    newTimeStamp.mSampleTime += nFrames;
    OSStatus result = m_pMainInput->PullInput(ioActionFlags, newTimeStamp, 0, nFrames);
    if (result != noErr)
        return result;
where m_pMainInput == GetInput(0), so I pull the same number of frames from the upstream object, but at a shifted timestamp. Then I initialize the output buffers like this:
    m_pMainOutput->PrepareBuffer(nFrames);
and fill them with data, for simplicity like this:
    for (UInt32 channel = 0; channel < GetOutput(0)->GetStreamFormat().mChannelsPerFrame; ++channel) {
        const AudioBuffer *srcBuffer = &m_pMainInput->GetBufferList().mBuffers[channel];
        AudioBuffer *dstBuffer = &m_pMainOutput->GetBufferList().mBuffers[channel];
        memcpy(dstBuffer->mData, srcBuffer->mData, srcBuffer->mDataByteSize);
    }
This works pretty well in the AULab application, but the auval tool fails with the following error:
    Input Format: AudioStreamBasicDescription: 2 ch, 44100 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved
    Output Format: AudioStreamBasicDescription: 2 ch, 44100 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved
    Render Test at 512 frames
    ERROR: AU is not passing time stamp correctly. Was given: 0, but input received: 512

    * * FAIL
    --------------------------------------------------
    AU VALIDATION FAILED: CORRECT THE ERRORS ABOVE.
    --------------------------------------------------
If I remove the line that shifts the timestamp, so that newTimeStamp stays equal to inTimeStamp, and call just

    OSStatus result = m_pMainInput->PullInput(ioActionFlags, newTimeStamp, 0, nFrames);
    if (result != noErr)
        return result;
then auval validates my plugin successfully. So I’m pretty sure auval is complaining that my Audio Unit pulls data at a different timestamp than the one it was given. I found a property, kAudioUnitProperty_InputSamplesInOutput, that looks like what I need to assure auval that the timestamps are fine, but auval never sets this property on my audio unit, so it appears to be useless here.
Does anybody know whether reading data at shifted timestamps is possible for ‘aufx’ audio units? I looked at ‘auol’ (offline) units, but they seem to be of no use: neither AULab nor Final Cut Pro uses that type of audio unit.
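I’m wondering whether the right approach is instead to always pull at the host’s timestamp and hide the lookahead behind an internal delay. A minimal sketch of that idea in plain C++ (no real AU SDK calls, all names illustrative): the effect records the timestamp it pulls at, so you can see it never deviates from what the host passed in, and the N-frame lookahead shows up as N frames of output delay.

```cpp
#include <cstddef>
#include <deque>
#include <vector>

// Illustrative stand-in for a lookahead effect that always pulls input at
// exactly the timestamp the host passes in (which is what auval checks),
// and realizes its lookahead as an internal delay line instead.
struct LookaheadEffect {
    std::size_t lookahead;         // N frames of lookahead -> N frames of delay
    std::deque<float> fifo;        // internal delay line
    std::vector<double> pulledAt;  // timestamps input was pulled at

    explicit LookaheadEffect(std::size_t n) : lookahead(n) {
        fifo.assign(n, 0.0f);  // pre-fill with silence = reported latency
    }

    // Host calls: render(timestamp, input chunk) -> output chunk of same size.
    std::vector<float> render(double ts, const std::vector<float>& in) {
        pulledAt.push_back(ts);  // pull upstream at the host's own timestamp
        for (float s : in) fifo.push_back(s);
        std::vector<float> out(in.size());
        for (std::size_t i = 0; i < in.size(); ++i) {
            out[i] = fifo.front();
            fifo.pop_front();
        }
        return out;
    }
};
```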
-- Please help! Roman