Re: Best approach for a clock source
- Subject: Re: Best approach for a clock source
- From: daniel medina <email@hidden>
- Date: Tue, 26 Jan 2010 15:14:06 +0100
Thanks a lot for your input, Brian; you've really set me on the right path. I have quite a few things to change in my code now. But I have another question:
I initially opted for an independent timer because I needed to determine the timestamp of a real-time input event (i.e., somebody playing a note). The Core Audio callback is called at a constant rate; in the case of RemoteIO the buffer size is typically 1024 samples, which is about 23 ms at 44.1 kHz. If I use the Core Audio timeline, that 23 ms granularity would be the limit of precision for timestamping a real-time input event. Or that is what I thought...
With your approach, maybe the right thing to do would be to store the host time at the start of each Core Audio callback, and then calculate the real-time event's timestamp from the host time of the received event, the host time of the callback, and the callback's sample-time stamp. Is the time at which the Core Audio callback fires the correct reference for this?
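Something like the following is what I have in mind. (Just a rough sketch to check my understanding; mHostTime and mSampleTime would come from the AudioTimeStamp passed to the render callback, and the names are mine, not from any API.)

#include <mach/mach_time.h>
#include <CoreAudio/CoreAudioTypes.h>

// Updated together at the top of each render callback from inTimeStamp
static volatile Float64 gCallbackSampleTime;  // inTimeStamp->mSampleTime
static volatile UInt64  gCallbackHostTime;    // inTimeStamp->mHostTime
static Float64 gSampleRate = 44100.0;         // the stream's sample rate

// Convert an input event's host time (from mach_absolute_time())
// into a sample position on the Core Audio timeline.
Float64 SampleTimeForEvent(UInt64 eventHostTime)
{
    mach_timebase_info_data_t timebase;
    mach_timebase_info(&timebase);  // host ticks -> nanoseconds

    // Elapsed time between the last callback and the event, in seconds
    Float64 deltaSeconds = (Float64)(eventHostTime - gCallbackHostTime)
                           * timebase.numer / timebase.denom / 1e9;

    return gCallbackSampleTime + deltaSeconds * gSampleRate;
}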
Thanks,
daniel
On Jan 26, 2010, at 2:15 PM, Brian Willoughby wrote:
> Daniel,
>
> Your last sentence is correct: This approach is just plain wrong.
>
> Context switches are expensive, and you've created a design where every time quantum is supposed to both signal a semaphore in one thread and wake up in another thread. Even if you're sure that you're going to do something different on every time quantum, that's still the least efficient way to do it. What's worse is that any time quanta where you are not scheduling anything will simply waste a lot of CPU for nothing. Whenever semaphores are the correct design, you only want to signal them when something is actually going to occur, not on every tick. But you really don't want semaphores at all for the situation you describe.
>
> When it comes to audio, you already have a time line from CoreAudio which is as accurate as the system can possibly provide. You need not create any thread of your own to track time. All you need to do is convert BPM to time and then schedule your audio according to the CoreAudio time line. When rendering a given buffer, you should be able to quickly determine whether your next audio event happens within the buffer, and your code can then render the scheduled audio at the correct offset within the buffer.
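> For instance, the BPM-to-frames conversion is simple arithmetic. A rough sketch, assuming a sixteenth-note grid for the quantization (names are illustrative, not from any API):
>
> // Frames between successive grid ticks at a given tempo,
> // assuming a sixteenth-note quantization grid
> double FramesPerTick(double bpm, double sampleRate)
> {
>     double secondsPerBeat = 60.0 / bpm;           // one quarter note
>     return (secondsPerBeat / 4.0) * sampleRate;   // one sixteenth note
> }
>
> At 120 BPM and 44.1 kHz that gives 5512.5 frames per sixteenth note, so event times stay exact on the sample grid instead of drifting with timer wakeups.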
>
> Remember, no matter what you do to trigger a semaphore, you cannot produce audio instantly. The best you can do is alter the next buffer of audio. Thus, semaphores can never be real time for audio. Instead of waiting for the semaphore to fire, you just need to predict when the next scheduled event will occur, and wait for the buffer containing that audio event to be requested so that you can then render it as soon as possible.
>
> Your approach actually increases latency, no matter whether you use RemoteIO, because CoreAudio is always processing audio for "the future." If you blindly render audio in complete buffers based on "old" information, and wait until an event is supposed to happen "now" then you're too late because the audio buffer for "now" has already been sent to CoreAudio. Instead, what you want to do is structure your code to think in terms of "the future" so that you can predict in advance when an event is scheduled to happen, then render the complete audio when CoreAudio asks for it. CoreAudio will take care of the task of making sure that an audio buffer plays at exactly the time that CoreAudio says it will play.
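> A skeleton of a render callback along these lines might look like the following (a sketch only; names are illustrative, error handling and voice mixing are omitted):
>
> #include <AudioUnit/AudioUnit.h>
>
> static Float64 nextEventSampleTime; // maintained in advance by the scheduler
> static Float64 framesPerTick;       // from the BPM conversion above
>
> // Hypothetical helper that writes one event into the buffer
> extern void RenderEventAtOffset(AudioBufferList *ioData,
>                                 UInt32 offset, UInt32 totalFrames);
>
> static OSStatus RenderCallback(void *inRefCon,
>                                AudioUnitRenderActionFlags *ioActionFlags,
>                                const AudioTimeStamp *inTimeStamp,
>                                UInt32 inBusNumber,
>                                UInt32 inNumberFrames,
>                                AudioBufferList *ioData)
> {
>     Float64 bufferStart = inTimeStamp->mSampleTime;
>     Float64 bufferEnd   = bufferStart + inNumberFrames;
>
>     // Render any event whose scheduled time falls within this buffer,
>     // at its exact frame offset (assumes the schedule never falls
>     // behind the timeline).
>     while (nextEventSampleTime < bufferEnd) {
>         UInt32 offset = (UInt32)(nextEventSampleTime - bufferStart);
>         RenderEventAtOffset(ioData, offset, inNumberFrames);
>         nextEventSampleTime += framesPerTick; // advance the schedule
>     }
>     return noErr;
> }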
>
> The approach that I am describing would require you to do some work in advance, outside the callback, so that everything is ready to go without any file access, memory allocation, or other operations that are forbidden within a callback. But a sequencing app generally already knows what it's going to play well in advance, so the problem should not be that difficult. Even if you allow live manipulation of the sequence data, all it takes is a little latency between user input and data manipulation to allow the system to work smoothly.
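> One common way to hand edited sequence data to the callback without locks or allocation is an atomic pointer swap; a sketch using OSAtomic follows (the Sequence layout and the deferred-free policy are assumptions, not anything your code requires):
>
> #include <libkern/OSAtomic.h>
> #include <CoreAudio/CoreAudioTypes.h>
>
> typedef struct {
>     UInt32  numEvents;
>     Float64 eventSampleTimes[256]; // precomputed in advance, in samples
> } Sequence;
>
> static Sequence * volatile gSequence; // read by the render callback
>
> // UI thread: build the new sequence completely (allocation happens
> // here, never in the callback), then publish it atomically.
> void InstallSequence(Sequence *newSequence)
> {
>     Sequence *old;
>     do {
>         old = gSequence;
>     } while (!OSAtomicCompareAndSwapPtrBarrier(old, newSequence,
>                                        (void * volatile *)&gSequence));
>     // Defer freeing 'old' until the callback can no longer be using it.
> }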
>
> By the way, my comments above are general to CoreAudio on all platforms, not specific to the iPhone.
>
> Brian Willoughby
> Sound Consulting
>
>
> On Jan 26, 2010, at 04:11, daniel medina wrote:
>> I'm building a sequencing app for the iPhone in which synchronized audio events are obviously needed. To do this I've set up an independent thread for clock generation purposes, setting its priority to 1.0 with setThreadPriority. All I have to do is specify a BPM and a desired quantization / precision, and based on that this thread will sleep for the correct amount of time. On waking up, it will send a message with semaphore_signal, so the "audio control" thread (another independent thread, not the Core Audio callback thread) can do its thing in sync (i.e. changing some parameters in sync with the master BPM). This audio control thread waits with semaphore_timedwait(semaphore, timeout). The code looks like this:
>>
>> - (void)ClockThread {
>>     NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
>>     [NSThread setThreadPriority:1.0];
>>     while (1) {
>>         semaphore_signal(semaphore);  // wake the audio control thread
>>         [NSThread sleepForTimeInterval:kQuantizeTime];
>>     }
>>     [pool release];  // never reached
>> }
>>
>> - (void)AudioControl {
>>     NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
>>     mach_timespec_t timeout = { 10, 0 };  // 10 seconds
>>     while (1) {
>>         kern_return_t err = semaphore_timedwait(semaphore, timeout);
>>         // do parameter change here
>>         // if err == KERN_OPERATION_TIMED_OUT -> error
>>     }
>>     [pool release];  // never reached
>> }
>>
>> The actual audio calculation / processing could happen directly in the callback, or in an independent audio processing thread, using the callback just to copy the buffer. Again, these two threads (audio processing / callback) would be coordinated using Mach semaphores (I've read on this list that signaling them is non-blocking, so you can use them in the Core Audio callback).
>>
>> Does this approach seem like the right one if audio event synchronization is the top priority in the application? Would it be easy to lose sync? I've done some preliminary tests and it seems to work OK. I'm really a newbie in this area, so maybe this approach is just plain wrong…
>
>