A number of simple questions
- Subject: A number of simple questions
- From: Seth Willits <email@hidden>
- Date: Wed, 14 Nov 2012 15:08:45 -0800
- Why is AudioTimeStamp.mSampleTime a float if it's an integer number of samples?
- Will all AudioUnits automatically have their input and output stream formats set to the canonical form?
- Can you ask an AudioUnit to render the same timestamp multiple times, or might a unit upstream depend on the timestamp being monotonic?
- When choosing and setting an LPCM stream format, is there any reason to not set mFramesPerPacket to 1?
- For an audio player unit, I noticed that in the ScheduledAudioFileRegion structure, mFramesToPlay is a UInt32. This seems strange, since it limits a region to less than about 12.5 hours at 96 kHz, doesn't it? Seems odd.
- How many frames should I render at a time? 512? 1024? Is there any reason to stick to a power of two? I imagine the answer might depend on whether the rendering has to happen in real time. If I'm calling AudioUnitRender() for offline rendering, interleaving with video at 30 fps, and my audio is 44.1 kHz, is there an advantage to grabbing 1470 frames (1/30th of a second) at a time versus 512? Does the number of frames I render matter if, say, the destination is AAC? (Perhaps there's some minimum or maximum number of frames to give it?)
--
Seth Willits
Coreaudio-api mailing list (email@hidden)