Re: remoteio glitches - minimum workable buffer size? maximum available cpu time?
- Subject: Re: remoteio glitches - minimum workable buffer size? maximum available cpu time?
- From: Doug Wyatt <email@hidden>
- Date: Tue, 1 Jun 2010 10:22:50 -0700
You shouldn't have to mess with that thread's scheduling parameters. There are lots of other apps running compute-intensive RemoteIO threads without encountering these issues.
The computation time just affects how often the scheduler will check and round-robin a thread with other realtime threads. That changing this value appears to address the issue is interesting. You say you've made your socket-listener thread run at realtime priority. So I wonder if a lock being taken on the socket-listener thread is causing a priority inversion. (If it were, that might be considered a scheduler bug if the other realtime thread doesn't get to run instead.) A kernel or Instruments trace ought to be able to tell us what other threads are running when your RemoteIO thread is supposed to be doing its work.
Also, there's a method in CAPThread that can display the current scheduled priority of a thread. You should be able to use this to verify that the thread really is in the realtime band (96-97).
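Something along these lines (a sketch of my own, not the exact CAPThread code; POLICY_FIFO omitted for brevity) will read it back:

#include <mach/mach.h>
#include <pthread.h>

/* Return a thread's current scheduled priority, or -1 on error. */
int GetScheduledPriority( pthread_t thread )
{
    mach_port_t machThread = pthread_mach_thread_np( thread );
    thread_basic_info_data_t basicInfo;
    mach_msg_type_number_t count = THREAD_BASIC_INFO_COUNT;

    if( thread_info( machThread, THREAD_BASIC_INFO,
            (thread_info_t)&basicInfo, &count ) != KERN_SUCCESS )
        return -1;

    if( basicInfo.policy == POLICY_TIMESHARE ){
        policy_timeshare_info_data_t info;
        count = POLICY_TIMESHARE_INFO_COUNT;
        if( thread_info( machThread, THREAD_SCHED_TIMESHARE_INFO,
                (thread_info_t)&info, &count ) != KERN_SUCCESS )
            return -1;
        return info.cur_priority;
    }else{ /* realtime (round-robin) threads land here */
        policy_rr_info_data_t info;
        count = POLICY_RR_INFO_COUNT;
        if( thread_info( machThread, THREAD_SCHED_RR_INFO,
                (thread_info_t)&info, &count ) != KERN_SUCCESS )
            return -1;
        return info.depressed ? info.depress_priority : info.base_priority;
    }
}

Called on the RemoteIO thread, this should report 96 or 97 while the thread is still in the realtime band, and something much lower if it has been demoted.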
Doug
On May 31, 2010, at 7:46, Ross Bencina wrote:
> Thanks Aran...
>
>> Are you getting these in release mode, or even when not running
>> from the debugger? I have had this issue when pushing a view
>> when running in the debugger, but not when the run is initiated
>> from the device (not Xcode).
>
> Pretty much the same regardless. It glitches in release mode launched by hand, not connected to computer.
>
>> I'm using this code and it gives me a buffer size of 64 samples.
>
> Ok, so my buffer size is not the problem then.
>
> I decided to look into how the RemoteIO proc thread was scheduled using thread_policy_get()...
>
> thread_time_constraint_policy_data_t ttcpolicy;
> mach_msg_type_number_t count = THREAD_TIME_CONSTRAINT_POLICY_COUNT;
> boolean_t get_default = FALSE;
> kern_return_t theError = thread_policy_get(
>     mach_thread_self(),
>     THREAD_TIME_CONSTRAINT_POLICY,
>     (thread_policy_t)&ttcpolicy,
>     &count,
>     &get_default );
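>
> To sanity-check the call, I just print the fields (assuming <stdio.h> is included):
>
> if( theError == KERN_SUCCESS && !get_default ){
>     printf( "period=%u computation=%u constraint=%u preemptible=%d\n",
>             ttcpolicy.period, ttcpolicy.computation,
>             ttcpolicy.constraint, (int)ttcpolicy.preemptible );
> }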
>
> For a 44.1k stream with 256-frame buffers I get:
>
> ttcpolicy.period = 2229115
> ttcpolicy.computation = 12000
> ttcpolicy.constraint = 2229115
> ttcpolicy.preemptible = 1
>
>
> <computation> is about 0.5% of the period, so doing anything more than copying memory (and perhaps even that) in the IOProc is likely to get preempted by other RT threads. That's not really a big problem unless there isn't enough CPU to go around (and Instruments wasn't showing the CPU maxing out), but I suspect that blowing this budget is getting the thread demoted to non-real-time status.
>
> I've seen a couple of different definitions of these parameters floating around, but my understanding is that <constraint> sets a deadline for completion, and <computation> is either the total (maximum) computation per period or the computation allowed before initial preemption. If it's the former, then obviously a CELT decode that takes 50% of the CPU is not going to work out well. If it's the latter, I'm not sure a large value counts as "reasonable" -- it's trying to chew most of the period -- and apparently the Mach scheduler will demote "unreasonable" realtime threads. Still, I'm sure the guys at Apple have more idea about this than I do. By the way, this link was pretty handy:
> http://developer.apple.com/mac/library/documentation/Darwin/Conceptual/KernelProgramming/scheduler/scheduler.html
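>
> To put a number on that (my arithmetic, using the values above): if <computation> is the total budget per period, a decode eating 50% of the CPU needs about 0.5 * 2229115 = 1114558 time units per period -- roughly 93x the default budget of 12000.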
>
>
> I tried setting the computation allowance to 66% of the period from my first RemoteIO callback:
>
> ttcpolicy.computation = (ttcpolicy.period * 2) / 3; // 66%
> thread_policy_set( mach_thread_self(), THREAD_TIME_CONSTRAINT_POLICY,
>         (thread_policy_t)&ttcpolicy, THREAD_TIME_CONSTRAINT_POLICY_COUNT );
>
> And it fixes the glitching =)
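>
> Wrapped up as a helper so it can be reused on other threads (my own sketch, not an Apple API -- call it from the thread whose policy you want to change):
>
> #include <mach/mach.h>
> #include <stdint.h>
>
> /* Scale the calling thread's time-constraint computation
>    allowance to numer/denom of its existing period. */
> kern_return_t SetComputationFraction( uint32_t numer, uint32_t denom )
> {
>     thread_time_constraint_policy_data_t policy;
>     mach_msg_type_number_t count = THREAD_TIME_CONSTRAINT_POLICY_COUNT;
>     boolean_t get_default = FALSE;
>     kern_return_t err = thread_policy_get( mach_thread_self(),
>             THREAD_TIME_CONSTRAINT_POLICY, (thread_policy_t)&policy,
>             &count, &get_default );
>     if( err != KERN_SUCCESS )
>         return err;
>     policy.computation =
>             (uint32_t)(((uint64_t)policy.period * numer) / denom);
>     return thread_policy_set( mach_thread_self(),
>             THREAD_TIME_CONSTRAINT_POLICY, (thread_policy_t)&policy,
>             THREAD_TIME_CONSTRAINT_POLICY_COUNT );
> }
>
> SetComputationFraction( 2, 3 ) from the render callback does the same as the code above.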
>
> Actually, I had to set a 10% compute real-time policy on my socket listener thread (which currently just does a memcpy), otherwise the GUI would starve incoming network packets, especially when doing the text-select-finger-drag-with-magnifying-glass thing.
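>
> That promotion looks something like this (my sketch; the 10ms period is an arbitrary choice for a network thread, and it assumes the QA1643 timebase conversion applies -- which, per my notes below, I'm not sure of):
>
> #include <mach/mach.h>
> #include <mach/mach_time.h>
> #include <stdint.h>
>
> /* Give the calling thread a time-constraint policy:
>    10ms period, 10% computation allowance. */
> void MakeListenerRealtime( void )
> {
>     mach_timebase_info_data_t tb;
>     mach_timebase_info( &tb );
>
>     /* 10ms in nanoseconds, converted to abstime units */
>     uint64_t period = 10000000ULL * tb.denom / tb.numer;
>
>     thread_time_constraint_policy_data_t policy;
>     policy.period = (uint32_t)period;
>     policy.computation = (uint32_t)( period / 10 ); /* 10% */
>     policy.constraint = (uint32_t)period;
>     policy.preemptible = TRUE;
>
>     thread_policy_set( mach_thread_self(),
>             THREAD_TIME_CONSTRAINT_POLICY, (thread_policy_t)&policy,
>             THREAD_TIME_CONSTRAINT_POLICY_COUNT );
> }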
>
> I'm not sure if increasing real-time compute margins like this is best practice though. Does anyone have any other suggestions?
>
>
> One weird thing is, the numbers returned by thread_policy_get() don't change even if I switch the session to 512 sample buffers or 128 sample buffers using kAudioSessionProperty_PreferredHardwareIOBufferDuration, even after power cycling the iPod. Perhaps internally RemoteIO always uses a 256 frame buffer and just calls the client multiple times if it asks for smaller buffers? That would certainly explain why I was seeing a lot more glitching when I switched to 512 frame buffers.
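>
> For reference, I request the buffer duration with the usual Audio Session call (after AudioSessionInitialize; the duration below is 256 frames at 44.1k):
>
> #include <AudioToolbox/AudioToolbox.h>
>
> Float32 duration = 256.0f / 44100.0f; /* ~5.8ms */
> AudioSessionSetProperty(
>     kAudioSessionProperty_PreferredHardwareIOBufferDuration,
>     sizeof(duration), &duration );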
>
>
> I spent quite a while trying to determine the units for THREAD_TIME_CONSTRAINT_POLICY time values on the iPhone/iPod. I've included my findings below in case they help someone else. It's still a bit confusing, so could someone from Apple please confirm whether this stuff is correct?
>
>
> Firstly, the THREAD_TIME_CONSTRAINT_POLICY time values don't seem to be based on the absolute time values discussed in QA1643:
> http://developer.apple.com/iphone/library/qa/qa2009/qa1643.html
>
> My understanding is that THREAD_TIME_CONSTRAINT_POLICY time values are in "abstime" or "AbsoluteTime" units. There seem to be varying descriptions of what these are. QA1643 suggests that they can be converted to nanoseconds using mach_timebase_info()...
>
> mach_timebase_info_data_t info;
> mach_timebase_info(&info);
>
> which on this iPod has the values:
>
> info.numer = 1000000000
> info.denom = 24000000
>
> 1000000000 / 24000000 = ~41.67 nanos per abstime unit.
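>
> i.e. the usual QA1643-style conversion (the helper name is mine):
>
> #include <mach/mach_time.h>
> #include <stdint.h>
>
> /* abstime -> nanoseconds, per mach_timebase_info */
> uint64_t AbsToNanos( uint64_t abs )
> {
>     mach_timebase_info_data_t tb;
>     mach_timebase_info( &tb );
>     return abs * tb.numer / tb.denom;
> }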
>
> But this gives strange ttcpolicy values:
>
> ttcpolicy.period = 2229115 * 41.67 = ~92879792 nanos (~92.9 milliseconds)
> ttcpolicy.computation = 12000 * 41.67 = ~500000 nanos (0.5 milliseconds)
> ttcpolicy.constraint = 2229115 * 41.67 = ~92879792 nanos (~92.9 milliseconds)
>
>
>
> After some more digging, I discovered that maybe the ttcpolicy values are in "AbsoluteTime units, which are equal to 1/4 the bus speed on most machines."
> (source: http://music.columbia.edu/pipermail/portaudio/2002-February/000486.html)
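>
> The relevant query from that code boils down to plain sysctl usage:
>
> #include <sys/types.h>
> #include <sys/sysctl.h>
>
> int mib[2] = { CTL_HW, HW_BUS_FREQ };
> int busSpeed = 0;
> size_t len = sizeof( busSpeed );
> sysctl( mib, 2, &busSpeed, &len, NULL, 0 );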
>
> On this iPod that returns a bus speed of 100000000. If one AbsoluteTime unit is a quarter of a bus cycle, that's 400000000 units per second, which gives more sensible values for ttcpolicy:
>
> ttcpolicy.period = 2229115 / 400000000 = 0.00557279 (~5.57 milliseconds)
> ttcpolicy.computation = 12000 / 400000000 = 0.00003 (30 microseconds)
> ttcpolicy.constraint = 2229115 / 400000000 = 0.00557279 (~5.57 milliseconds)
>
> The expected period for 256 samples @ 44.1k is 5.8ms -- 5.57ms is a plausible period for a DAC clock with +/-5% tolerance.
>
>
> But, as I said above, the ttcpolicy values aren't changing when I change the RemoteIO buffer size.
>
> So I'm left wondering whether thread_policy_get() is working at all, and whether RemoteIO is even using the correct units for thread_policy_set().
>
>
>
> Can anyone shed some light on this please?
>
>
> Thank you.
>
>
> Ross.
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden