Well, since you seem to be quoting my code…
Among the reasons for Core Audio being in C:
* At the time it was created (early OS X era… 10.2, I think?), it needed to support both Carbon (C/C++) and Cocoa (Obj-C) development. Procedural C is a great way to do that.
* Performance matters, particularly when your code is being executed hundreds of times a second. Long term, nothing short of assembly language has topped C for performance. Not that Obj-C is necessarily bad in that respect -- thank goodness it's not a VM or scripting language -- but message-dispatch does have a cost.
* On that point, notice how many other low-to-mid level media APIs in the world -- QuickTime, OpenGL, OpenAL, OpenMAX, etc. -- are also in C. Notice also how the high-level parts of AV Foundation are Obj-C, but the low-level stuff that gets into the nitty-gritty of data buffers is in C (the Core Media framework). Smart people developed that stuff, and I have little reason to presume I know better.
* Over the last 30 years, C has never not been a top 5 language. It used to be assumed that any competent programmer knew C. Nowadays that's no longer the case -- you can have a substantial career doing deep work using only server-side scripting languages and maybe Java -- but C is still hugely popular. This year, C once again became the world's top programming language on the Tiobe ranking, thanks in part to its usefulness in mobile.
A lot of people seem to be afraid of C and are determined to put a wrapper around Core Audio. If you feel you really need to do this, look at Alex Wiltschko's Novocaine -- http://alexbw.github.com/novocaine/ -- because he knows what he's doing and a lot of people are happy with it.
Personally, I like C, but more than that, I believe in what I'd call "idiomatic programming": Core Audio has a great deal of internal consistency and predictability, and I like my code around it to hew to those conventions… the same way that I want my Obj-C code to look like other people's Obj-C code and not like, say, Java. Often, I'll have Core Audio structs as properties in an Obj-C class, and then the getters/setters or utility methods around those will be mostly C calls. That way a caller to my class only sees Obj-C, but inside it I'm still writing plain C in a way that other Core Audio programmers would understand and could maintain. This does take a little savvy about understanding just how C relates to Obj-C -- like how you can mix C functions anywhere you like in an Obj-C class (as long as you're aware that they know nothing about the object's state, i.e., its ivars or properties), or how any Obj-C object is a pointer and is therefore perfectly valid as an inUserData pointer in Core Audio APIs that use callbacks.
--Chris
Sent from my iPad
Hi all,
I've noticed that a lot of Core Audio code is in C/C++ and not Objective-C.
Is there any reason for this, and should I do the same?
For example, the simple code below to handle an AUGraph is in C. Is there any reason for me to do the same?
Thanks.
Pier.
void CreateMyAUGraph(MyAUGraphPlayer *player)
{
    // create a new AUGraph
    CheckError(NewAUGraph(&player->graph),
               "NewAUGraph failed");

    // generate a description that will match our output device (speakers)
    AudioComponentDescription outputcd = {0};
    outputcd.componentType = kAudioUnitType_Output;
    outputcd.componentSubType = kAudioUnitSubType_GenericOutput;
    outputcd.componentManufacturer = kAudioUnitManufacturer_Apple;
    (continued)
_______________________________________________
Coreaudio-api mailing list