Re: Synchronizing with the actual USB device audio clock
- Subject: Re: Synchronizing with the actual USB device audio clock
- From: Torrey Holbrook Walker <email@hidden>
- Date: Thu, 22 Dec 2005 10:30:44 -0800
Hi Philippe,
Last time I meant the USB 1.0 audio class specification (not 1.1, which is the latest full speed spec release). When I say USB 2.0 audio specification, I mean the yet-to-be-released 2.0 audio class spec, which specifically contains a number of changes to support high speed audio devices. Sorry about the confusion. But if you want to implement something like this in a class-compliant manner, the best advice I can give is to provide a hardware sample rate switch/knob. The expected behavior is that when the switch is moved from, say, 44.1 kHz to 48 kHz, the device will drop off the bus and re-enumerate with a new descriptor specifying 48 kHz as the only valid sample rate for input and output.
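For illustration, the class-specific Type I format descriptor for the 48 kHz position of such a switch might look roughly like this, assuming a stereo, 16-bit device (field layout per the USB audio class 1.0 spec; the concrete values are only an example, not from any particular product):

/* USB Audio Class 1.0 Type I format descriptor advertising 48 kHz
 * as the only valid sample rate.  Channel count and bit resolution
 * are illustrative assumptions. */
#include <stdint.h>

static const uint8_t format_type_descriptor[] = {
    0x0B,             /* bLength: 8 + 3 * bSamFreqType               */
    0x24,             /* bDescriptorType: CS_INTERFACE               */
    0x02,             /* bDescriptorSubtype: FORMAT_TYPE             */
    0x01,             /* bFormatType: FORMAT_TYPE_I                  */
    0x02,             /* bNrChannels: stereo                         */
    0x02,             /* bSubframeSize: 2 bytes per audio subframe   */
    0x10,             /* bBitResolution: 16 bits                     */
    0x01,             /* bSamFreqType: one discrete frequency        */
    0x80, 0xBB, 0x00, /* tSamFreq[0]: 48000 Hz, 24-bit little-endian */
};

When the switch moves back to 44.1 kHz, the device would re-enumerate with tSamFreq reading 0x44, 0xAC, 0x00 (44100 Hz) instead.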
> That's a good idea. I'll try it. The question, then, is how an audio app will behave if the device disconnects and reconnects while it may be in use?
That depends on the audio application. If people are using a digital audio workstation, you don't really expect them to go around changing the sample rate with reckless abandon. Generally people set their equipment up for their session, launch the app, and start working. Results will vary from application to application. If you want to know how the software will react to the device dropping off the bus and re-enumerating, just hotplug it while the app is running. If the results are unacceptable, you don't have to kick the device off the bus when the sample rate is switched. However, the sample rate shouldn't change until the user physically detaches and reattaches the device. With this documented, at least the user won't be surprised by the results.
> It seems that the Apple driver implements the "explicit" feedback endpoint synchronization method. I suppose that some buffering is needed at the device level. Any idea of the amount of buffering? (It has an impact on the latency of the system, and that's an important point for us.)
Unfortunately I don't have any suggestions about how much buffering will be required for your device.

> Besides the use of a "feedback endpoint", the standard mentions the possibility of using "implicit feedback", provided that the device complies with certain "constraints" on its endpoints (which my device satisfies). If I have understood correctly, the principle is to deduce the data rate to use on the OUT stream from the data rate measured on the IN stream. Hence my question: does the AppleUSBAudio driver implement this "implicit feedback" algorithm?
AppleUSBAudio doesn't do anything special in this case. If there is no feedback endpoint, it doles out the average number of sample frames for the sample rate every USB frame, unless an extra sample's worth of data has been accumulated, in which case it sends one more (i.e., at 44.1 kHz it sends 44 sample frames for nine USB frames and then 45 sample frames on the tenth). Any implicit feedback will have to be handled at the device level. AppleUSBAudio will make the samples available at the nominal rate as dictated by the USB clock, and the device can play them whenever it sees fit.
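For reference, a minimal sketch of that averaging scheme (the names and structure are illustrative, not AppleUSBAudio's actual source): at 44.1 kHz the driver owes 44.1 sample frames per 1 ms USB frame, so it sends the integer part each frame and carries the fractional remainder forward until a whole extra sample has accumulated.

/* Fractional-accumulator sketch of the "average plus occasional
 * extra sample" scheme described above.  Illustrative only. */
#include <stdio.h>

int main(void)
{
    const unsigned sample_rate    = 44100; /* samples per second           */
    const unsigned frames_per_sec = 1000;  /* full-speed USB frames (1 ms) */
    unsigned accumulated = 0;              /* remainder, scaled by 1000    */

    for (unsigned frame = 1; frame <= 20; frame++) {
        unsigned to_send = sample_rate / frames_per_sec;  /* 44           */
        accumulated += sample_rate % frames_per_sec;      /* +0.1 sample  */
        if (accumulated >= frames_per_sec) {              /* whole sample */
            to_send += 1;                                 /* send 45      */
            accumulated -= frames_per_sec;
        }
        printf("frame %2u: %u sample frames\n", frame, to_send);
    }
    return 0;
}

Running it prints 44 sample frames for nine consecutive frames and 45 on every tenth, matching the behavior described above.

/thw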
--------------------
Torrey Walker
CPU Audio Software Team
Apple Computer, Inc.