
newbie asking what's OK to do on realtime audio thread


  • Subject: newbie asking what's OK to do on realtime audio thread
  • From: Leo Thiessen <email@hidden>
  • Date: Tue, 27 Jan 2015 10:07:57 -0600

Hi folks,

I’m new to Core Audio programming and to audio programming in general.  Having read through some online materials, I’m trying to wrap my head around what is safe to do in a realtime audio render callback and what’s not.  As far as I can determine, I should really only be calling functions that make guarantees about the way they behave: fast, no memory allocation or freeing, consistent CPU demand rather than high spikes of demand, etc.
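
To make my mental model concrete, here is the sort of pattern I understand to be safe: everything preallocated up front, and cross-thread signaling done with an atomic flag instead of a lock.  This is just a sketch with made-up names (SketchState, sketchRender), not code I’m shipping:

#include <AudioToolbox/AudioToolbox.h>
#include <stdatomic.h>
#include <string.h>

// Sketch: all state is allocated once, on the main thread, before rendering starts.
typedef struct {
    atomic_bool  finishedFlag;   // written lock-free from the render thread, polled elsewhere
    float       *scratch;        // preallocated scratch buffer; never allocated/freed in the callback
    UInt32       scratchFrames;
} SketchState;

static OSStatus sketchRender(void                       *inRefCon,
                             AudioUnitRenderActionFlags *ioActionFlags,
                             const AudioTimeStamp       *inTimeStamp,
                             UInt32                      inBusNumber,
                             UInt32                      inNumberFrames,
                             AudioBufferList            *ioData) {
    SketchState *state = (SketchState *)inRefCon;

    // Bounded, allocation-free work only: here, just fill the output with silence.
    for (UInt32 i = 0; i < ioData->mNumberBuffers; i++) {
        memset(ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize);
    }

    // Signal the main thread without taking a lock; it polls this flag.
    atomic_store_explicit(&state->finishedFlag, true, memory_order_release);
    return noErr;
}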

This is maybe really obvious, but help me out: how do I know or find out what’s safe to do?  For example, can I call AudioUnitGetProperty() on the render thread?  I suspect the answer might be of the “that depends…” variety, but any pointers to help me locate the answers would be much appreciated.  Included below is an example of what I’m actually doing in a render callback I’m working on.  Is this sane/OK to do?

I’m using theamazingaudioengine.com, building for Mac OS X 10.9+ and iOS 7+ targets.

static OSStatus _renderCallback2(__unsafe_unretained VEAETrack         *THIS,
                                 __unsafe_unretained AEAudioController *audioController,
                                 const AudioTimeStamp                  *time,
                                 UInt32                                 frameCount,
                                 AudioBufferList                       *audio) {
    
    // Do the main audio processing (the superclass uses an AUAudioFilePlayer to
    // render the audio, then applies a gain and pan filter using Apple’s vDSP functions)
    THIS->_superclassRenderCallback(THIS, audioController, time, frameCount, audio);
    
    // Get our current time data
    if(noErr==AudioUnitGetProperty(THIS->_au,
                                   kAudioUnitProperty_CurrentPlayTime,
                                   kAudioUnitScope_Global,
                                   0,
                                   &THIS->_audioTimeStamp,
                                   &THIS->_audioTimeStampSize)) {
        UInt32 currLoopCount = floor(THIS->_audioTimeStamp.mSampleTime / THIS->_mFramesToPlay);
        THIS->_currentTime = (THIS->_audioTimeStamp.mSampleTime - ((float)currLoopCount * THIS->_mFramesToPlay)) / THIS->_outSampleRate;
        
        // Check for callbacks to be done
        if(THIS->_completionBlock) {
            if(THIS->_isLooping) {
                // If we are on a new loop number, trigger completion callback
                if(currLoopCount > THIS->_numLoopsCompleted) {
                    THIS->_numLoopsCompleted++;
                    AEAudioControllerSendAsynchronousMessageToMainThread(audioController, _notifyCompletion, &THIS, sizeof(VEAETrack*)); // does not lock/block the realtime thread
                }
            } else {
                // If we're in the last renderCallback of a non-looping channel, trigger the completion callback
                UInt32 remainderPlusFramesThisRender = ((UInt32)THIS->_audioTimeStamp.mSampleTime % THIS->_mFramesToPlay) + frameCount;
                if(remainderPlusFramesThisRender >= THIS->_mFramesToPlay) {
                    AEAudioControllerSendAsynchronousMessageToMainThread(audioController, _notifyCompletion, &THIS, sizeof(VEAETrack*));
                }
            }
        }
    }

    return noErr;
}
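
One alternative I’ve considered is polling the property from the main thread instead (say, on a timer) and publishing the result where the render callback can read it without locking.  A rough sketch, where _pollPlayTime and _publishedSampleTime are made-up names:

// Sketch: runs on the main thread, e.g. from an NSTimer; hypothetical names.
static void _pollPlayTime(__unsafe_unretained VEAETrack *THIS) {
    AudioTimeStamp ts;
    UInt32 size = sizeof(ts);
    if (noErr == AudioUnitGetProperty(THIS->_au,
                                      kAudioUnitProperty_CurrentPlayTime,
                                      kAudioUnitScope_Global,
                                      0,
                                      &ts,
                                      &size)) {
        // Single aligned 64-bit store; I'm assuming that is effectively atomic
        // on the targets I care about, though I'm not sure that holds.
        THIS->_publishedSampleTime = ts.mSampleTime;
    }
}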

The part I’m wondering about is my call to AudioUnitGetProperty(…): how would I know or find out whether that’s OK to do in realtime, as above?  How about other Core Audio functions, such as calling MusicDeviceMIDIEvent() on an Apple AUSampler instrument audio unit from the realtime audio thread?  Or must I run my own separate thread, parallel to the realtime audio thread, to do these types of things?
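
For concreteness, this is the kind of MusicDeviceMIDIEvent() call I mean (sketch only; _sketchNoteOn is a made-up helper, and 0x90 is a MIDI note-on on channel 0):

// Sketch: is scheduling a sample-accurate note-on from inside a render
// callback like this OK, or does it belong on another thread?
static void _sketchNoteOn(AudioUnit samplerUnit,
                          UInt32    note,
                          UInt32    velocity,
                          UInt32    offsetFrames) {
    // offsetFrames is the sample offset within the current render cycle
    MusicDeviceMIDIEvent(samplerUnit, 0x90, note, velocity, offsetFrames);
}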

