Sorry to catch this a bit late, but I thought I’d add my two cents, having been through this learning curve quite recently...
Personally, I feel that all the C stuff is really due to age rather than efficiency. OpenGL dates back to 1992, OpenAL is specifically modelled on OpenGL, and C was then, and probably still is, the most universal language with respect to different OSes and library compatibility. In Cocoa, it seems to me analogous to Core Graphics versus UIKit for drawing and animation: the latter is slowly implementing, in an OO way, the features that were originally supported by CG____() functions. It’s just that there’s no CoreAudio equivalent yet.
The main (only?) place where efficiency matters enough that an Obj-C message call should be avoided is in realtime processing, like in your render callback. Since Obj-C is dynamic, methods are resolved to function addresses at runtime. That lookup is cached, which means a memory allocation for first-time calls, which means a potential page fault, which could mean a disk read. Such time-unbounded operations eventually lead to dropouts in audio, even though, on the whole, message calls are even faster than C++ virtual methods: http://www.mikeash.com/pyblog/performance-comparisons-of-common-operations-leopard-edition.html
C++ gives you some OO cleanliness without the dynamic overhead. It has a few key features that I find super useful. The first is ‘inlining’ functions. Putting the keyword ‘inline’ at the beginning of your function definition asks the compiler not to perform a normal function call but instead to insert the function’s actual code wherever it is called - kind of like a #define, but without the ugliness.
The second is the Standard Template Library (the STL), particularly vector<>. This lets you add some OO goodness to buffers without sacrificing speed: behind the scenes it’s just a C pointer, and the methods are all inlined. Here’s the API: http://www.cplusplus.com/reference/stl/vector/. Also, delving a little deeper, C++’s operator overloads let you do some neat tricks. For instance, you can create a class that acts like a type (e.g. AtomicBool) and wraps all getter and setter operations in OSAtomic_() ops. Then you can handle thread-safe communication between your render callback and your main thread in an elegant way, without cluttering up your code. Have a look: https://github.com/Club15CC/Marshmallows/blob/master/Source/AudioMarshmallows/Private/AUMAtomicType.h
> That said, nothing prevents you from writing high-level Obj-C wrappers for operations that are not subject to real-time constraints.
I agree, especially for code that sets up the graph. Compare this to your graph’s setup code ;) https://github.com/Club15CC/Marshmallows/blob/master/Project/AUMTesterViewController.m Warning: it’s all a bit alpha at this stage... feel free to contribute!
Gratefully,
Hari Karam Singh
From: coreaudio-api-bounces+harikaram=email@hidden [mailto:coreaudio-api-bounces+harikaram=email@hidden] On Behalf Of Jean-Daniel Dupas
Sent: 29 October 2012 13:20
To: Jack Nutting
Cc: email@hidden
Subject: Re: iOS - Question on AUGraph in objective C / CPP
On 29 Oct 2012, at 13:58, Jack Nutting <email@hidden> wrote:
On Mon, Oct 29, 2012 at 1:20 PM, Chris Adamson <email@hidden> wrote:
* On that point, notice how many other low-to-mid-level media APIs in the world -- QuickTime, OpenGL, OpenAL, OpenMAX, etc. -- are also in C. Notice also how the high-level parts of AV Foundation are Obj-C, but the low-level stuff that gets into the nitty-gritty of data buffers is in C (the Core Media framework). Smart people developed that stuff, and I have little reason to presume I know better.
I agree with pretty much all of this, with maybe one exception: creating an audio graph. Setting up an audio graph requires a lot of really repetitive code.
That's why Apple provides a C++ wrapper for such CoreAudio APIs. The CoreAudio SDK provides a lot of helpful glue code that can be reused in your app.
Unfortunately, the CoreAudio SDK documentation has always been sparse and incomplete, and the provided C++ API is not always consistent (especially in error handling: sometimes it uses exceptions, sometimes it returns error codes).
Regarding Obj-C for CoreAudio: message dispatching is usually fast, but the problem is that you cannot have any guarantee about what happens under the hood, especially when there is a method lookup cache miss. In that case, the runtime falls back to a slow code path that can even call allocation methods, and that is something you don't want to do on a real-time thread.
That said, nothing prevents you from writing high-level Obj-C wrappers for operations that are not subject to real-time constraints.