remoteio glitches - minimum workable buffer size? maximum available cpu time?
- Subject: remoteio glitches - minimum workable buffer size? maximum available cpu time?
- From: "Ross Bencina" <email@hidden>
- Date: Sat, 29 May 2010 21:12:30 +1000
Hi Everyone
I'm trying to get some low-latency network audio streaming code up on a
current-generation iPod touch / latest 3.x iPhone OS. Using remoteio I'm
getting glitches which look like missed remoteio deadlines -- they happen
when there's on-screen animation (like the keyboard sliding out), or often
when a two-line text label is updated every 10 seconds from an NSTimer
callback. My remoteio callback is completely free of locks and blocking
APIs, but it does use some CPU... I've included some info later about why
I'm using remoteio.
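For context, the output unit is set up roughly like this (error checks and
stream-format setup trimmed, so treat it as a sketch of the shape rather than
my exact code):

#include <AudioUnit/AudioUnit.h>
#include <AudioToolbox/AudioToolbox.h>

/* My render callback -- its rough shape is sketched further down this mail. */
OSStatus RenderCallback(void *inRefCon, AudioUnitRenderActionFlags *ioActionFlags,
                        const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber,
                        UInt32 inNumberFrames, AudioBufferList *ioData);

static void SetUpRemoteIO(void *streamState)
{
    AudioComponentDescription desc;
    desc.componentType = kAudioUnitType_Output;
    desc.componentSubType = kAudioUnitSubType_RemoteIO;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;
    desc.componentFlags = 0;
    desc.componentFlagsMask = 0;

    AudioComponent comp = AudioComponentFindNext(NULL, &desc);
    AudioUnit ioUnit = NULL;
    AudioComponentInstanceNew(comp, &ioUnit);

    /* Attach the lock-free render callback to the output element. */
    AURenderCallbackStruct cb = { RenderCallback, streamState };
    AudioUnitSetProperty(ioUnit, kAudioUnitProperty_SetRenderCallback,
                         kAudioUnitScope_Input, 0, &cb, sizeof(cb));

    /* (kAudioUnitProperty_StreamFormat setup omitted here.) */

    AudioUnitInitialize(ioUnit);
    AudioOutputUnitStart(ioUnit);
}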
My questions are:
- I'm currently using a 256-sample callback buffer size at 44.1 kHz (set using
kAudioSessionProperty_PreferredHardwareSampleRate and
kAudioSessionProperty_PreferredHardwareIOBufferDuration; a sketch of the
session setup follows these questions). Should I expect this setting to
deliver stable audio? Can I use even smaller buffers?
- My remoteio callback is using about 50% CPU (based on readings from
Instruments; another ~5% goes to the media server and ~10% to the network
stack). The callback runs a CELT decode for every frame. Is there something
about the way audio HAL thread deadlines are scheduled on iPhone OS that
would make 50% "too much" to consume in the render callback, and hence get
the thread descheduled? How much CPU time should I be able to use reliably?
- How are thread priorities organised on iPhone OS? Does audio have maximum
priority, or does it have to compete with display/animation UI work?
- Is there anything equivalent to HAL telemetry that I can access on iPhone
OS? Any hints on how to diagnose this kind of thing on iPhone OS?
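For reference, this is roughly how I'm asking for the 256-sample buffers
(error checking stripped; the category and interruption listener here are
placeholders for whatever the app actually uses):

#include <AudioToolbox/AudioToolbox.h>

static void RequestSmallBuffers(void)
{
    AudioSessionInitialize(NULL, NULL, NULL /* interruption listener */, NULL);

    UInt32 category = kAudioSessionCategory_MediaPlayback;
    AudioSessionSetProperty(kAudioSessionProperty_AudioCategory,
                            sizeof(category), &category);

    Float64 preferredRate = 44100.0;
    AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareSampleRate,
                            sizeof(preferredRate), &preferredRate);

    Float32 preferredDuration = 256.0f / 44100.0f;   /* ~5.8 ms */
    AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration,
                            sizeof(preferredDuration), &preferredDuration);

    AudioSessionSetActive(true);

    /* These are only preferences, so read back what was actually granted. */
    Float32 grantedDuration = 0;
    UInt32 size = sizeof(grantedDuration);
    AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareIOBufferDuration,
                            &size, &grantedDuration);
}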
Before you ask, here's why I'm not using AudioQueues:
0. This application requires minimum possible latency. (I'd like <5 ms from
end of decode to analog playout.)
1. My packets may arrive out of order. Since packets can't be decoded out of
order, I already need to queue them and keep them in a sequence re-assembly
data structure -- I'm doing so much work already that it doesn't make a lot
of sense to interpose another queuing layer.
2. In the final implementation I'm going to need clock-skew management
logic / variable SRC. Please correct me if I'm wrong, but I'm under the
impression AudioQueue isn't going to make that any easier.
The factors arising from (1) seem to make remoteio a good fit, since the
remoteio callbacks provide a natural time cutoff at which I can decide
whether a network packet is too late and hence choose to apply packet
concealment at the last possible moment. I could do this in a separate
thread that feeds remoteio, but that would just amount to another real-time
scheduled thread running in lock step with remoteio, and it would add a
buffer's worth of latency -- although if the remoteio thread isn't able to
provide enough CPU time, perhaps that is the best option (?).
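In case it helps to see where the time goes, the callback is shaped roughly
like this. PopNextPacket(), DecodePacket() and ConcealFrame() are placeholders
for my jitter buffer, CELT decode and concealment code (not real API names),
and I'm assuming 16-bit interleaved output here:

#include <AudioUnit/AudioUnit.h>

typedef struct StreamState StreamState;       /* decoder + jitter-buffer state */
typedef struct NetworkPacket NetworkPacket;   /* one sequence-ordered packet  */

/* Placeholders for the lock-free jitter buffer, CELT decode, and concealment. */
const NetworkPacket *PopNextPacket(StreamState *state, const AudioTimeStamp *now);
void DecodePacket(StreamState *state, const NetworkPacket *packet,
                  SInt16 *out, UInt32 frames);
void ConcealFrame(StreamState *state, SInt16 *out, UInt32 frames);

OSStatus RenderCallback(void *inRefCon,
                        AudioUnitRenderActionFlags *ioActionFlags,
                        const AudioTimeStamp *inTimeStamp,
                        UInt32 inBusNumber,
                        UInt32 inNumberFrames,
                        AudioBufferList *ioData)
{
    StreamState *state = (StreamState *)inRefCon;
    SInt16 *out = (SInt16 *)ioData->mBuffers[0].mData;

    /* Everything below is lock-free: the packet queue is a single-reader/
       single-writer structure fed by the network thread. */
    const NetworkPacket *packet = PopNextPacket(state, inTimeStamp);
    if (packet != NULL) {
        /* Packet arrived in time: decode straight into the output buffer. */
        DecodePacket(state, packet, out, inNumberFrames);
    } else {
        /* Too late or missing: apply concealment at the last possible moment. */
        ConcealFrame(state, out, inNumberFrames);
    }
    return noErr;
}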
All suggestions welcome.
Thank you!
Ross.