
Re: Has AudioUnits actually been tried in carbon?


  • Subject: Re: Has AudioUnits actually been tried in carbon?
  • From: Bill Stewart <email@hidden>
  • Date: Sat, 29 Sep 2001 15:08:17 -0700

Maybe this question has been answered, but just in case...

On Wednesday, September 26, 2001, at 10:33 PM, Brian Barnes wrote:
1. Do you have to have a "master unit" that will mix all the other units for output to the speaker (er ... audio device.) or can one (or more) units do this by themselves?

A mixer is only used to mix some number of inputs to a single output. It is very dissimilar to the Sound Manager in this way.

AUs are essentially processing units that do something to their inputs (if they have any) and generate output.

2. What is the "subtype" for running sampled data?

I'm not sure that the question is phrased correctly.

There's some sample code in X that I recommend you check out:
/Developer/Examples/CoreAudio/Services:
- defines examples of how to use the output unit. Output units are attached to an AudioDevice, and when you start an output unit its behaviour is *defined* to drive the attached sources (whether connected units or input callbacks). In the device case, an output unit is "driven" by the I/O cycle of the device.
- this example also shows how to use the AudioConverter APIs to convert int data to float and to do sample rate conversion
- one of these files shows how to use the AudioConverter explicitly in your code
- the other shows how to specify a non-float format as the input stream format of the OutputUnit and have the HALOutputUnit use the AudioConverter *implicitly* to convert that format for the device it outputs to.

We expect that this format conversion is a unique property of output units; it is *NOT* a generally expected feature of most audio units, where we still presume the generic format to be float32.

As part of the SDK we're preparing, we'll provide C++ base classes to use to write Audio Units (including a sample unit); we'll also be publishing a doc that outlines the basic protocols of how we expect audio units to behave.

We also understand the frustration that the missing SDK is causing (as well as the lack of detailed docs) - and all I can ask for is some forgiveness and understanding as we get these things together...

A subtype for an AudioUnit generally describes one of two things:
'out ' and 'musd' describe additional component selectors that are applicable to those classes of units
'frmt' describes a unit that will typically take differently formatted streams on input and output
- a user of this unit cannot assume that the format is the same (though it *may* be)
- thus processing cannot generally be done in place
- examples - interleavers, deinterleavers, sample rate converters
'efct' describes a unit that does NO format conversion from input to output and in many cases can process the audio data in place...
- examples - delays, filters, "simple" reverbs
'mixr' is kind of like a format converter, but does mixing of input streams

The ID (or, in Component Manager parlance, the manufacturer) field then describes a unique component.


Take this code:

blah=(AudioUnit)OpenDefaultComponent(kAudioUnitComponentType, ***** );

What do you want to do? - that's what determines the **** and the missing #### (for the ID field)

The examples in the
Java/MIDIGraph
Java/SMFPlayer

directories in 10.1 show how to construct a graph

Java/FindUnits
shows how to get a list of all the audio units currently found in the system

(Yes - we also know that CoreAudio-Java does *not* currently work with a Cocoa-Java app - but that will also be addressed in the SDK...)

What goes in ****? What is the subtype I should use?

As a matter of fact, what do half those types do???

3. What properties are *required* to be set before starting an audio unit?

That depends on the unit - this is an area where we need more documentation, but we're happy to answer specific questions on this list in lieu of that.

The only one I set right now is "kAudioUnitProperty_SetInputCallback". Are others required?

This only needs to be set if you're not connecting an audio unit up to provide input for another one (and an AU can have both a connected unit AND an InputCallback to provide input on *different* input buses, if that is desired...)

Bill

And that's just the top of my list of questions! Hence, the need for sample code! I saw the CASoundLab sample code, and it says "audio unit version coming" - I could use anything, even in COBOL!

[>] Brian
_______________________________________________
coreaudio-api mailing list
email@hidden
http://www.lists.apple.com/mailman/listinfo/coreaudio-api


mailto:email@hidden
tel: +1 408 974 4056
__________________________________________________________________________
Cat: "Leave me alone, I'm trying to nap... Ah, What's that clicking noise?"
__________________________________________________________________________


References: 
Re: Has AudioUnits actually been tried in carbon? (From: Brian Barnes <email@hidden>)
