Re: Using Aggregate Devices (created programmatically) with an AUGraph
- Subject: Re: Using Aggregate Devices (created programmatically) with an AUGraph
- From: Jeff Moore <email@hidden>
- Date: Tue, 11 Jul 2006 11:29:28 -0700
Creating and/or configuring an aggregate device does not involve any
asynchronous operations. Pretty much everything happens during the
individual API calls. However, creating/configuring an aggregate does
involve sending notifications about what changes with each API call.
Generally, these will only be aggregation-related properties, but if you are
changing the master device, it might trigger a format change if the devices
in the aggregate have differing sample rates. (Recall that the master device
determines the sample rate for the aggregate.) Configuring an aggregate can
also result in the stream layout of the aggregate changing, which again
triggers notifications.
Generally, all of these notifications will fire from the same thread that is
making the HAL API call that changes the aggregate device, before that call
returns. Perhaps you have some code that isn't ready for these changes and is
leaving the AUGraph set up incorrectly?
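One way to see these notifications as they happen is to install a wildcard
property listener on the aggregate before configuring it and log what fires
and on which thread. The following is only a diagnostic sketch, not code from
this thread; it assumes 'aggregate' already holds the AudioDeviceID of the
created aggregate device:

#include <CoreAudio/CoreAudio.h>
#include <pthread.h>
#include <stdio.h>

static OSStatus LogAggregateChange(AudioObjectID inObjectID,
                                   UInt32 inNumberAddresses,
                                   const AudioObjectPropertyAddress inAddresses[],
                                   void *inClientData)
{
    // Per the explanation above, this will generally run on the same thread
    // that is making the configuring HAL call, before that call returns.
    for (UInt32 i = 0; i < inNumberAddresses; ++i) {
        printf("object %u: property 0x%08x changed on thread %p\n",
               (unsigned)inObjectID,
               (unsigned)inAddresses[i].mSelector,
               (void *)pthread_self());
    }
    return noErr;
}

static void WatchAggregate(AudioDeviceID aggregate)
{
    AudioObjectPropertyAddress anyProperty = {
        kAudioObjectPropertySelectorWildcard,
        kAudioObjectPropertyScopeWildcard,
        kAudioObjectPropertyElementWildcard
    };

    // Install the listener before the sub-device list, master device, or
    // stream layout is changed, so none of the notifications are missed.
    OSStatus err = AudioObjectAddPropertyListener(aggregate, &anyProperty,
                                                  LogAggregateChange, NULL);
    if (err != noErr)
        fprintf(stderr, "AudioObjectAddPropertyListener failed: %d\n", (int)err);
}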
On Jul 10, 2006, at 10:09 PM, Neil Clayton wrote:
Hi,
I've recently got an aggregate device set up and working. What
I'm trying to do is monitor the volume level of any given audio
input device (mic, etc). I'm using an AUGraph to do this, and
sampling the levels of the mixer, as shown in ComplexPlayThru.
I'm using Aggregate Devices because I want the monitoring code to
work with any device (not just full duplex devices that have an in
and out).
All is well if I either manually or programmatically create the
aggregate, then start my program and monitor the levels. If I
create the aggregate programmatically and then straight away attempt
to monitor the input levels (that is, the monitoring comes directly
after I set up the delegate), I never see any audio levels.
If I comment out the setupAggregateToUse call (as I have done here),
then the code works fine. If I set up the aggregate directly before
I create the graph, it never works. There appears to be some
asynchronous setup going on with the aggregate, occurring after the
AUGraph is set up, that invalidates that graph or its settings. I
found this by reducing my monitor method to the following (the
original version is included below):
- (void) monitorDevice:(MTCoreAudioDevice *)aDevice {
    [self setupAggregateToUse:aDevice];
    [NSTimer scheduledTimerWithTimeInterval:1
                                     target:self
                                   selector:@selector(go:)
                                   userInfo:aDevice
                                    repeats:NO];
}
after the call to setupAggregateToUse, and shoving the rest of the
method into the go: method.
How would I work around this correctly? I don't want to have a 1s
pause in there, as it seems just ... wrong. It looks like I should
intercept some event from CoreAudio (probably to do with the
aggregate changing in some way), and then and only then go and
set up my AUGraph.
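One way to express that idea is to register a listener for the aggregate's
full sub-device list and only build the graph once it fires. The following is
only a sketch of that approach, not code from this thread; it assumes an
'aggregate' ivar holding the aggregate's AudioDeviceID and a hypothetical
-buildGraphForAggregate method standing in for the AUGraph setup shown below,
all living in the same class (and .m file) as that code:

static OSStatus SubDeviceListChanged(AudioObjectID inObjectID,
                                     UInt32 inNumberAddresses,
                                     const AudioObjectPropertyAddress inAddresses[],
                                     void *inClientData)
{
    // Get off the HAL's calling thread before touching the AUGraph.
    [(id)inClientData performSelectorOnMainThread:@selector(aggregateDidChange)
                                       withObject:nil
                                    waitUntilDone:NO];
    return noErr;
}

- (void) monitorDevice:(MTCoreAudioDevice *)aDevice
{
    AudioObjectPropertyAddress theAddress = {
        kAudioAggregateDevicePropertyFullSubDeviceList,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster
    };

    // Listen before configuring the aggregate so the change cannot be missed.
    AudioObjectAddPropertyListener(aggregate, &theAddress,
                                   SubDeviceListChanged, self);
    [self setupAggregateToUse:aDevice];
}

- (void) aggregateDidChange
{
    [self buildGraphForAggregate];   // hypothetical: the AUGraph setup below
}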
Regards,
Neil Clayton
- (void) monitorDevice:(MTCoreAudioDevice *)aDevice {
    //[self setupAggregateToUse:aDevice];
    [self stopPreview];

    NewAUGraph(&auGraph);
    [self createHALAU];
    [self createMixer];
    [self connectAUs];

    OSStatus err = AUGraphOpen(auGraph);
    checkErr(err);

    [self getNodeReferences];
    [self setNumberOfBusses];
    [self enableIO];
    [self enableMetering];
    [self setDevice:aggregate];

    err = AUGraphInitialize(auGraph);
    checkErr(err);

    [self setVolumes];
    [self startPreview];
}
And the setupAggregateToUse method is:
AudioObjectPropertyAddress theAddress;

// After creation, set up the input and output of the audio device.
// We already have the input - we need the default output.
MTCoreAudioDevice *defaultOutput = [MTCoreAudioDevice defaultOutputDevice];

NSMutableArray *myDeviceList = [NSMutableArray new];
[myDeviceList addObject:[defaultOutput deviceUID]];
[myDeviceList addObject:[sourceDevice deviceUID]];

theAddress.mSelector = kAudioAggregateDevicePropertyFullSubDeviceList;
theAddress.mScope = kAudioObjectPropertyScopeGlobal;
theAddress.mElement = kAudioObjectPropertyElementMaster;
BAILSETERR( AudioObjectSetPropertyData(aggregate, &theAddress, 0, NULL,
                                       sizeof(CFArrayRef), &myDeviceList) );

NSString *masterDevice = [defaultOutput deviceUID];
theAddress.mSelector = kAudioAggregateDevicePropertyMasterSubDevice;
theAddress.mScope = kAudioObjectPropertyScopeGlobal;
theAddress.mElement = kAudioObjectPropertyElementMaster;
BAILSETERR( AudioObjectSetPropertyData(aggregate, &theAddress, 0, NULL,
                                       sizeof(CFStringRef), &masterDevice) );
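If it helps while debugging, the sub-device list can also be read back after
the sets succeed to confirm the aggregate actually took the new configuration.
This is only a sketch continuing from the code above, reusing the same
'aggregate' and 'theAddress' variables (the HAL returns a CFArray that the
caller releases):

theAddress.mSelector = kAudioAggregateDevicePropertyFullSubDeviceList;
theAddress.mScope = kAudioObjectPropertyScopeGlobal;
theAddress.mElement = kAudioObjectPropertyElementMaster;

CFArrayRef currentList = NULL;
UInt32 dataSize = sizeof(currentList);

// Read the sub-device list back out of the aggregate to verify the set took.
OSStatus readErr = AudioObjectGetPropertyData(aggregate, &theAddress, 0, NULL,
                                              &dataSize, &currentList);
if (readErr == noErr && currentList != NULL) {
    NSLog(@"aggregate sub-devices: %@", (NSArray *)currentList);
    CFRelease(currentList);
}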
--
Jeff Moore
Core Audio
Apple