
Re: Correct way to implement an AudioUnit rendering function?


  • Subject: Re: Correct way to implement an AudioUnit rendering function?
  • From: Eric Lee <email@hidden>
  • Date: Fri, 27 Oct 2006 16:31:57 -0700

Hi Stephen,

The rendering occurs on a high-priority thread, so locks and disk accesses are not a good idea. Moreover, each render callback is expected to finish within a certain period of time -- for a render block size of 1024 samples at a 44.1 kHz sampling rate, for example, your render callback should complete in less than about 23 msec, otherwise you will get dropouts.

Eric
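
[Editor's note: to make the arithmetic above concrete, here is a minimal sketch, not from the original thread, of the per-callback time budget Eric describes: frames per render slice divided by the sample rate. The function name RenderBudgetSeconds is illustrative.]

#include <stdio.h>

/* Per-callback time budget: the render callback must finish before the
   hardware needs the next slice, i.e. framesPerSlice / sampleRate seconds. */
static double RenderBudgetSeconds(unsigned framesPerSlice, double sampleRate)
{
    return (double)framesPerSlice / sampleRate;
}

int main(void)
{
    /* 1024 frames at 44.1 kHz: roughly 23.2 ms, matching the figure above. */
    printf("budget: %.1f ms\n", 1000.0 * RenderBudgetSeconds(1024, 44100.0));
    return 0;
}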

On Oct 27, 2006, at 2:53 PM, Stephen F. Booth wrote:

Hi all,

I am working on an audio player application and I've run into some buffering/threading issues that I can't quite figure out. I'm at my wit's end so I figured I could ask here for some help and clarification.

I've implemented an AudioPlayer class that uses the default output AU to play audio; the audio's source is a file on disk in some format not natively supported by Core Audio. I use a custom render function which is a thin wrapper around the player's current stream, an instance of a StreamDecoder subclass that provides PCM data. While playback works, I am able to cause audible artifacts simply by launching an application or performing another task. I don't understand why this is happening, likely due to my ignorance of the way AudioUnits (in this case the output unit) work internally. I've read as much as I can find on the topic and still have a few questions. The playback logic, in pseudo-Objective-C, looks like:

MyRender()
==========
	player = (AudioPlayer *)inRefCon;
	[[player streamDecoder] readAudio:...];

StreamDecoder::readAudio()
==========================
	[conditionLock lockWhenCondition:hasData];
	... fill the requested buffer with audio from our internal buffer ...
	[conditionLock unlockWithCondition:(hasData or needsData, depending on how much remains in the buffer)];

StreamDecoder::fillBuffer() (a separate thread that fills the decoder's buffer as needed)
=========================
	[conditionLock lockWhenCondition:needsData];
	... read data from the file and convert to PCM, placing it in the internal buffer ...
	[conditionLock unlockWithCondition:hasData];



It seems to me that this approach should work, but it doesn't! I assume there is some sort of negative interaction occurring between the AudioUnit thread that calls MyRender() and subsequently readAudio(), and the thread that is performing the reads from disk. Have I missed something obvious in my logic?
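
[Editor's note: for reference, a minimal sketch, not the poster's actual code, of how a render callback such as MyRender is typically installed on an already-opened default output unit, and the signature Core Audio expects it to have. The name InstallCallback and the variable outputUnit are assumptions for illustration.]

#include <AudioUnit/AudioUnit.h>

/* The AURenderCallback signature; inRefCon carries the AudioPlayer
   instance, as in the pseudocode above. */
static OSStatus MyRender(void *inRefCon,
                         AudioUnitRenderActionFlags *ioActionFlags,
                         const AudioTimeStamp *inTimeStamp,
                         UInt32 inBusNumber,
                         UInt32 inNumberFrames,
                         AudioBufferList *ioData)
{
    /* ... pull inNumberFrames frames of PCM into ioData ... */
    return noErr;
}

/* Hook the callback up to the (already instantiated) default output unit. */
static void InstallCallback(AudioUnit outputUnit, void *player)
{
    AURenderCallbackStruct cb = { MyRender, player };
    AudioUnitSetProperty(outputUnit,
                         kAudioUnitProperty_SetRenderCallback,
                         kAudioUnitScope_Input,
                         0,
                         &cb,
                         sizeof(cb));
}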


The following are some things I'm unclear on:

What type of buffering do output audio units perform internally? Does one need to have a certain amount of audio ready to go in a buffer when MyRender() is called, or is buffering performed internally in the AU so that it is acceptable to hit the disk every time MyRender() is called?
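
[Editor's note: on the first question, one thing that can at least be inspected is how many frames the unit may ask for in a single render call, which bounds how much audio readAudio: must supply at once. A minimal sketch, assuming outputUnit is the opened default output unit:]

#include <AudioUnit/AudioUnit.h>
#include <stdio.h>

/* Query the maximum number of frames the output unit will request
   per render callback. */
static void PrintMaxFramesPerSlice(AudioUnit outputUnit)
{
    UInt32 maxFrames = 0;
    UInt32 size = sizeof(maxFrames);
    OSStatus err = AudioUnitGetProperty(outputUnit,
                                        kAudioUnitProperty_MaximumFramesPerSlice,
                                        kAudioUnitScope_Global,
                                        0,
                                        &maxFrames,
                                        &size);
    if (err == noErr)
        printf("max frames per slice: %u\n", (unsigned)maxFrames);
}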

Would it make more sense to write an AudioCodec along with another subclass of AudioFile and use the AUFilePlayer API? Obviously I am duplicating some effort by not using the AUFilePlayer, but I don't need all the horsepower it brings. It is markedly simpler to implement a render callback than an AudioCodec and an AudioFile subclass.

This is more a Cocoa question, but are NSConditionLocks too slow to use in this context? I can't imagine that they would be since they're just wrappers around pthreads and semaphores.
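
[Editor's note: a minimal, self-contained sketch of the NSConditionLock hand-off the pseudocode above describes, with hypothetical kNeedsData/kHasData conditions and a plain int standing in for the PCM buffer; it uses the modern detachNewThreadWithBlock: API for brevity and should be built with ARC. The lock itself is cheap, but lockWhenCondition: blocks, which is exactly what a real-time render thread cannot afford.]

#import <Foundation/Foundation.h>

enum { kNeedsData = 0, kHasData = 1 };

int main(void)
{
    @autoreleasepool {
        NSConditionLock *lock = [[NSConditionLock alloc] initWithCondition:kNeedsData];
        __block int buffer = 0;

        /* "Decoder" thread: refills the buffer whenever it has been drained. */
        [NSThread detachNewThreadWithBlock:^{
            for (int i = 1; i <= 3; i++) {
                [lock lockWhenCondition:kNeedsData];
                buffer = i;                          /* stand-in for decoding PCM */
                [lock unlockWithCondition:kHasData];
            }
        }];

        /* "Render" side (here, the main thread): blocks until data is ready.
           On a real render thread this wait is what can cause dropouts. */
        for (int i = 0; i < 3; i++) {
            [lock lockWhenCondition:kHasData];
            NSLog(@"consumed %d", buffer);
            [lock unlockWithCondition:kNeedsData];
        }
    }
    return 0;
}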

Have I just missed something obvious?

Thanks,
Stephen


References:
  • Correct way to implement an AudioUnit rendering function? (From: "Stephen F. Booth" <email@hidden>)
