Re: Do I really need to steer clear of Objective-C when providing data for RemoteIO?
- Subject: Re: Do I really need to steer clear of Objective-C when providing data for RemoteIO?
- From: Gregory Wieber <email@hidden>
- Date: Fri, 7 Jan 2011 08:58:33 -0800
You want your render callback code to be as efficient as possible, and that means the overhead of Objective-C message dispatch can come into play. That said, Objective-C messages are not always going to be your bottleneck, because it's easy to do something else in your code that's ten times worse than sending a few simple messages. But if you're unsure of the gritty details, then just avoid them altogether.
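To make that concrete, here is a minimal sketch of a render callback kept entirely in plain C -- no Objective-C messages on the audio thread. It assumes a mono, non-interleaved Float32 stream format, and the SineState struct is a hypothetical example, not part of any Apple API:

```c
#include <AudioToolbox/AudioToolbox.h>
#include <math.h>

// Plain C state shared with the callback; no Objective-C objects here.
typedef struct {
    double phase;       // current oscillator phase, in radians
    double frequency;   // in Hz; set up before the unit starts rendering
    double sampleRate;  // e.g. 44100.0
} SineState;

// Matches Apple's AURenderCallback signature.
static OSStatus RenderSine(void                        *inRefCon,
                           AudioUnitRenderActionFlags  *ioActionFlags,
                           const AudioTimeStamp        *inTimeStamp,
                           UInt32                       inBusNumber,
                           UInt32                       inNumberFrames,
                           AudioBufferList             *ioData)
{
    SineState *state = (SineState *)inRefCon;
    const double step = 2.0 * M_PI * state->frequency / state->sampleRate;
    Float32 *out = (Float32 *)ioData->mBuffers[0].mData;

    for (UInt32 i = 0; i < inNumberFrames; i++) {
        out[i] = (Float32)sin(state->phase);
        state->phase += step;
        if (state->phase > 2.0 * M_PI) state->phase -= 2.0 * M_PI;
    }
    return noErr;
}
```

Everything the callback touches is preallocated and reached through plain pointers, so there is no objc_msgSend, no locking, and no allocation on the audio thread.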
As far as sound generation systems requiring the dynamic polymorphism of Objective-C -- it's best to separate UI from sound generation. Let's say you have a complex object-oriented modular synth; in this scenario you could use Objective-C to maintain all of the connections between the different oscillators, filters, etc., and have all of that 'flattened' into a data structure from the C programming world -- e.g., pointers to fixed-length arrays (see the sketch below).
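A minimal sketch of what such a 'flattened' patch might look like; the names (FlatPatch, ModuleNode) are hypothetical, and the idea is that the Objective-C layer rebuilds this structure whenever the user edits the graph, then hands it to the audio thread:

```c
#define MAX_MODULES 32

typedef enum { MODULE_OSC, MODULE_FILTER, MODULE_MIXER } ModuleType;

// One entry per module, with fixed-size storage so the audio
// thread never allocates or follows Objective-C references.
typedef struct {
    ModuleType type;
    float      params[4];  // e.g. frequency, cutoff, resonance, gain
    int        input;      // index of the upstream module, or -1 for none
} ModuleNode;

// The whole patch, already sorted into signal-flow order by the
// Objective-C layer, so the callback can just walk nodes[0..count).
typedef struct {
    ModuleNode nodes[MAX_MODULES];
    int        count;
} FlatPatch;
```

The render callback iterates the array in order; all the dynamic dispatch happens once, at edit time, on the UI thread.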
On Thu, Jan 6, 2011 at 10:33 PM, Brian Willoughby <email@hidden> wrote:
On Jan 6, 2011, at 19:35, Morgan Packard wrote:
Based on jhno's recent post, and some other things I've heard, I'm getting increasingly nervous about the fact that I'm writing audio code in Objective-C. My app works fine currently, with a bunch of Objective-C audio code, but I'd hate to realize, a year or two from now, that I've been using the wrong tool for the job. Should I nip this in the bud and switch to using purely C++? Is there any case to be made for sticking with Objective-C?
If you can separate your ObjC code from the render callback with lock-free threaded queues, then you should be fine. Accessing files in your CoreAudio callback is worse than calling ObjC, and yet Apple is able to provide AUAudioFilePlayer with no problems. Inside that AudioUnit is a threaded queue designed so that the callback has lock-free access to the data, and it works just fine.
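Not from Brian's post, but for readers who haven't built one: a minimal sketch of the kind of lock-free single-producer/single-consumer queue he describes, written with C11 atomics for brevity (the names are hypothetical, and code of this era would have used OSAtomic or explicit memory barriers instead):

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define QUEUE_CAPACITY 64u  // must be a power of two

// One writer thread (Objective-C/UI side), one reader (render callback).
typedef struct {
    float            items[QUEUE_CAPACITY];
    _Atomic uint32_t head;  // advanced only by the consumer
    _Atomic uint32_t tail;  // advanced only by the producer
} SPSCQueue;

// Producer side, called from the Objective-C world. Never blocks;
// returns false if the queue is full and the caller should retry later.
static bool QueuePush(SPSCQueue *q, float value)
{
    uint32_t tail = atomic_load_explicit(&q->tail, memory_order_relaxed);
    uint32_t head = atomic_load_explicit(&q->head, memory_order_acquire);
    if (tail - head == QUEUE_CAPACITY) return false;           // full
    q->items[tail % QUEUE_CAPACITY] = value;
    atomic_store_explicit(&q->tail, tail + 1, memory_order_release);
    return true;
}

// Consumer side, called from the render callback. Never blocks;
// returns false if there is nothing new to read.
static bool QueuePop(SPSCQueue *q, float *out)
{
    uint32_t head = atomic_load_explicit(&q->head, memory_order_relaxed);
    uint32_t tail = atomic_load_explicit(&q->tail, memory_order_acquire);
    if (tail == head) return false;                            // empty
    *out = q->items[head % QUEUE_CAPACITY];
    atomic_store_explicit(&q->head, head + 1, memory_order_release);
    return true;
}
```

Because neither side ever takes a lock, the render callback can drain parameter changes without risking priority inversion against the UI thread.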
There could easily be some complex sound generation systems for which the dynamic polymorphism of ObjC is essential, but that doesn't mean the entire engine must (or can) run within the CoreAudio callback.
Brian Willoughby
Sound Consulting