

Best approach for a clock source


  • Subject: Best approach for a clock source
  • From: daniel medina <email@hidden>
  • Date: Tue, 26 Jan 2010 13:11:34 +0100

Hello,

I'm building a sequencing app for the iPhone in which synchronized audio events are obviously needed. To do this I've set up an independent thread for clock generation, setting its priority to 1.0 with setThreadPriority. All I have to do is specify a BPM and a desired quantization / precision, and based on that this thread sleeps for the correct amount of time. On waking up, it sends a message with semaphore_signal, so the "audio control" thread (another independent thread, not the Core Audio callback thread) can do its thing in sync (i.e. change some parameters in sync with the master BPM). The audio control thread waits with semaphore_timedwait(semaphore, timeout). The code would look like this:

- (void)ClockThread {

	NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

	[NSThread setThreadPriority:1.0];

	while (1) {
		semaphore_signal(semaphore);
		[NSThread sleepForTimeInterval:kQuantizeTime];
	}

	// Unreachable as long as the loop runs forever; needed only if
	// the loop ever gains an exit condition.
	[pool release];
}


- (void)AudioControl {

	NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

	mach_timespec_t timeout = { 10, 0 };

	while (1) {
		// semaphore_t is passed by value, not dereferenced.
		kern_return_t result = semaphore_timedwait(semaphore, timeout);
		if (result == KERN_OPERATION_TIMED_OUT) {
			// The clock missed a tick -> error
			continue;
		}
		// do parameter change here
	}

	[pool release];
}

The actual audio calculation / processing could happen directly in the callback, or in an independent audio processing thread, with the callback used just to copy the buffer. Again, these two threads (audio processing / callback) would be coordinated using mach semaphores (I've read on this list that signalling them doesn't block, so you can use them in the Core Audio callback).

As output I would use the RemoteIO unit, because of its lower latency.

All UI-related work would be done in the main Cocoa thread, at reduced priority.

Does this approach seem the right one if audio event synchronization is the first priority in the application? Would it be easy to lose sync? I've done some preliminary tests and it seems to work OK. I'm really a newbie in this area, so maybe this approach is just plain wrong…

Thanks in advance for your input,

daniel

Coreaudio-api mailing list      (email@hidden)

  • Follow-Ups:
    • Re: Best approach for a clock source
      • From: Brian Willoughby <email@hidden>