Re: Correct way to implement an AudioUnit rendering function?


  • Subject: Re: Correct way to implement an AudioUnit rendering function?
  • From: Kurt Revis <email@hidden>
  • Date: Sat, 28 Oct 2006 02:11:46 -0700

On Oct 27, 2006, at 2:53 PM, Stephen F. Booth wrote:

StreamDecoder::readAudio()
=======================
[conditionLock lockWhenCondition:hasData];
... fill the requested buffer with audio from our internal buffer...
[conditionLock unlockWithCondition:(either hasData or needsData, depending on how much remains in the buffer)];


StreamDecoder::fillBuffer (a separate thread that fills in the decoder's buffer as needed)
===================
[conditionLock lockWhenCondition:needsData];
... read data from the file and convert to PCM, placing in the internal buffer
[conditionLock unlockWithCondition:hasData];


It seems to me that this approach should work, but it doesn't! I assume there is some sort of negative interaction occurring between the AudioUnit thread that calls MyRender() (and subsequently readAudio()) and the thread that is performing the reads from disk. Have I missed something obvious in my logic?

Just the knowledge that your rendering function needs to complete quickly, which effectively means that it shouldn't block. With your current setup, you are pretty much guaranteed that it will block a lot. Any time the feeder thread is holding the lock -- which is a very long time, by audio standards, if it's waiting for disk reads to finish -- the audio thread will be blocked. In fact, you're not gaining anything at all by having them be separate threads, since they can never run at the same time.


(This is a pretty classic example of "priority inversion". The high-priority audio thread is stuck waiting for the lower-priority feeder thread, so effectively it runs at just as low a priority. It can't fulfill the promises that the audio system needs it to, so you get dropouts.)

Suggestions:

1) Don't ever block on the audio thread.

Come up with some non-blocking way to signal the other thread, if you need to. Last time I checked, the most convenient (but not the most well-known) guaranteed non-blocking way to wake up another thread was to use Mach semaphores. Call semaphore_signal() in the audio thread and semaphore_timedwait() in the feeder thread.
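For illustration, here's a minimal sketch of that idea (not code from any shipping project; the names gFeederSemaphore and FillMoreData() are hypothetical placeholders):

#include <mach/mach.h>

static semaphore_t gFeederSemaphore;

void FillMoreData(void);   /* hypothetical: decode from disk into your buffer */

/* Run once at startup: create a counting semaphore owned by this task. */
void SetUpFeederSemaphore(void)
{
    semaphore_create(mach_task_self(), &gFeederSemaphore, SYNC_POLICY_FIFO, 0);
}

/* Called from the render callback after it consumes data.
   semaphore_signal() will not block the audio thread. */
void WakeFeederFromRenderThread(void)
{
    semaphore_signal(gFeederSemaphore);
}

/* The feeder thread's loop: sleep until signalled (or until the
   timeout expires), then top the buffer up with decoded audio. */
void FeederThreadLoop(void)
{
    for (;;) {
        mach_timespec_t timeout = { 0, 250 * 1000 * 1000 };   /* 250 ms */
        semaphore_timedwait(gFeederSemaphore, timeout);
        FillMoreData();
    }
}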

You can also get away with pthread_cond_signal(); theoretically it can block for a little while, but in practice it isn't a problem. NSConditionLock is not ideal, because it provides no way to signal without acquiring the lock. (You can make sure the feeder thread doesn't take the lock for very long, but still, you're better off using the lower-level APIs.)

Or, just set a timer to wake up the feeder thread periodically; since you know how quickly the data will be getting consumed, it's pretty reliable to just wake up when you know you'll have a reasonable amount of work to do.
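A sketch of that approach, assuming the feeder keeps roughly kFramesBuffered frames of slack decoded ahead (kSampleRate, kFramesBuffered, and FillMoreData() are all hypothetical placeholders):

#include <unistd.h>

#define kSampleRate      44100.0
#define kFramesBuffered  32768        /* frames of slack kept decoded ahead */

void FillMoreData(void);              /* hypothetical: decode from disk */

void FeederThreadLoop(void)
{
    /* Wake up about four times per buffer-duration so the buffer never runs dry. */
    useconds_t napMicroseconds =
        (useconds_t)((kFramesBuffered / kSampleRate) * 1000000.0 / 4.0);

    for (;;) {
        FillMoreData();
        usleep(napMicroseconds);
    }
}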

2) Your feeder thread should read and decode much more than one buffer's worth of data. (In this context, "one buffer" means "the amount of data the output AU is going to ask you for".) You're playing sound from a file, so latency is not an issue, so buffer up as much data ahead of time as you can. This way, even if your feeder thread doesn't get to run for a while (because other things are going on in the system), you'll still have a lot of slack.
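To make that concrete, here's a minimal sketch of the usual arrangement: a single-producer/single-consumer ring buffer holding a few seconds of decoded audio, so the feeder can fall behind briefly without starving the render callback. (Illustrative only; all names are hypothetical, and it uses C11 atomics for the shared frame count.)

#include <stdatomic.h>
#include <string.h>

#define kRingFrames (44100 * 4)         /* about 4 seconds of mono float audio */

static float        gRing[kRingFrames];
static unsigned     gWriteIndex;        /* touched only by the feeder thread  */
static unsigned     gReadIndex;         /* touched only by the render thread  */
static atomic_uint  gFramesAvailable;   /* frames currently in the buffer     */

/* Feeder thread: copy up to 'count' decoded frames in; returns frames stored. */
unsigned RingWrite(const float *src, unsigned count)
{
    unsigned space = kRingFrames - atomic_load(&gFramesAvailable);
    if (count > space) count = space;
    for (unsigned i = 0; i < count; i++) {
        gRing[gWriteIndex] = src[i];
        gWriteIndex = (gWriteIndex + 1) % kRingFrames;
    }
    atomic_fetch_add(&gFramesAvailable, count);
    return count;
}

/* Render callback: copy out what's available, zero-fill any shortfall. */
unsigned RingRead(float *dst, unsigned count)
{
    unsigned avail = atomic_load(&gFramesAvailable);
    unsigned n = (count < avail) ? count : avail;
    for (unsigned i = 0; i < n; i++) {
        dst[i] = gRing[gReadIndex];
        gReadIndex = (gReadIndex + 1) % kRingFrames;
    }
    memset(dst + n, 0, (count - n) * sizeof(float));   /* silence on underrun */
    atomic_fetch_sub(&gFramesAvailable, n);
    return n;
}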

3) It helps to make your feeder thread slightly higher priority, and/or to make it a fixed-priority thread, so it wakes up more consistently.
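One way to do that, sketched with the portable pthread scheduling calls (the exact priority value is a guess you'd tune, keeping the feeder above ordinary threads but well below the audio thread):

#include <pthread.h>
#include <sched.h>

/* Call this from the feeder thread itself, once, after it starts. */
void PromoteFeederThread(void)
{
    struct sched_param param;
    int policy;

    pthread_getschedparam(pthread_self(), &policy, &param);

    /* SCHED_FIFO is a fixed-priority policy, so the thread isn't demoted
       over time the way an ordinary timesharing thread can be. */
    param.sched_priority = sched_get_priority_min(SCHED_FIFO) + 10;
    pthread_setschedparam(pthread_self(), SCHED_FIFO, &param);
}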


What type of buffering do output audio units perform internally? Does one need to have a certain amount of audio ready to go in a buffer when MyRender() is called, or is buffering performed internally in the AU so that it is acceptable to hit the disk every time MyRender() is called?

The output AUs don't do any extra buffering. They want their data and they want it now; any necessary buffering is up to you.


Would it make more sense to write an AudioCodec along with another subclass of AudioFile and use the AUFilePlayer API? Obviously I am duplicating some effort by not using the AUFilePlayer, but I don't need all the horsepower it brings. It is markedly simpler to implement a render callback than an AudioCodec and an AudioFile subclass.

I haven't done this, so I can't comment in any detail, but it's probably the right way to go in the long run. It would mean you could just use the AudioFilePlayer AU and not have to worry about any of this stuff. See the example in /Developer/Examples/CoreAudio/SimpleSDK/PlayFile.


This is more a Cocoa question, but are NSConditionLocks too slow to use in this context? I can't imagine that they would be since they're just wrappers around pthreads and semaphores.

It's not that they're slow, it's that they're not appropriate to this situation.


Anyway, if you continue down this road, you should know what other examples are out there.

I have some pretty ancient code that's pretty similar to what you're doing:
http://www.snoize.com/Code/PlayBufferedSoundFile.tar.gz
Please *don't* use it as-is -- it's doing some nasty stuff with QuickTime; see the Read Me file -- but feel free to steal any ideas from it. The basic concepts haven't changed too much since I wrote it. You could probably plug your own code into -convertIntoRingBuffer without changing too much else.


There's also the code in /Developer/Examples/CoreAudio/Services/AudioFileTools. 'afplay' does pretty much what you want. All the interesting buffering stuff is in CABufferQueue.h/cpp, specifically the class CAPullBufferQueue -- it lets you pull buffers from one thread and fill them in another thread. CAAudioFileReader (in CAAudioFileStreamer.h/cpp) is a subclass of CAPullBufferQueue that does the filling from an AudioFile.

(The problem with this sample code is that all the interesting stuff is buried under piles of C++ and subclasses and multiple inheritance and little wrapper functions and alternate-universe Windows code. It would be much more comprehensible if there was a simple diagram or explanation of how it all fit together. It may be useful code but it sure isn't very helpful teaching code.)

--
Kurt Revis
email@hidden
