Re: Reflector Driver Again
- Subject: Re: Reflector Driver Again
- From: Jeff Moore <email@hidden>
- Date: Tue, 6 Dec 2005 15:05:48 -0800
On Dec 6, 2005, at 2:30 PM, Kevin Kicklighter wrote:
Thanks for the response, Jeff.
Issue #1:
I am using IOLog in clipOutputSamples() and my complete sample
data does not get printed out.
My code:

    for (int i = 0; i < theNumberSamples; i++) {
        IOLog("%x ", theTargetBuffer[theFirstSample + i]);
    }

This is called under the inFormat->fBitWidth == 32 case of the
switch; each time through, the data is in little-endian format.
This is not going to have the result you think. IOLog is not the same
thing as fprintf or syslog. It has a very small buffer that is very
easy to overflow when you do what you are doing. I imagine that you
are not seeing all the results. Worse, making this call where you
are making it will cause timing issues. You really ought not to
call IOLog at all for this sort of task.
Better to use something like FireBug or some other tool that
monitors the data in a more reliable way.
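If you do need a peek at the data from inside the kernel, one
mitigation is to bound how much goes through IOLog per call. A
minimal sketch, assuming an IOKit kernel environment; LogSomeSamples
and kMaxSamplesToLog are hypothetical names, and the samples are
treated as raw 32-bit words. This only eases the buffer overflow;
the timing concern above still stands.

    #include <IOKit/IOLib.h>
    #include <IOKit/IOTypes.h>

    // Cap on how many samples get printed per call (hypothetical).
    enum { kMaxSamplesToLog = 4 };

    // Print at most kMaxSamplesToLog values so IOLog's small internal
    // buffer is not overflowed and the time spent in the real-time
    // path stays bounded.
    static void LogSomeSamples(const UInt32* buffer,
                               UInt32 firstSample,
                               UInt32 numSamples)
    {
        UInt32 count = (numSamples < kMaxSamplesToLog) ? numSamples
                                                       : kMaxSamplesToLog;
        for (UInt32 i = 0; i < count; i++) {
            IOLog("%lx ", (unsigned long)buffer[firstSample + i]);
        }
        IOLog("(%lu of %lu samples)\n",
              (unsigned long)count, (unsigned long)numSamples);
    }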
Issue #2:
Can you please verify that my understanding is correct in the
following:
I should be able to play music via iTunes, the NSSound (Cocoa)
classes, DVD Player, or whatever to the ARDevice (as selected in
the System Preferences Sound option), and the AudioReflector logic
takes this output (then calls clipOutputSamples) and reproduces it
as an input for other programs to read. Some portion of the
AudioEngine takes theTargetBuffer from clipOutputSamples() and feeds
it into convertInputSamples().
You are mostly correct. Your last statement is false though. The
reflector driver works because the input stream shares the exact same
buffer as the output stream. Nothing actually copies the data from
output back to input. It just happens.
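The sharing is set up when the engine creates its streams: both
directions are handed the same backing memory. A minimal sketch,
assuming the IOAudioFamily API; MyEngine, createStreams(), and
kBufferSize are hypothetical names, not the actual AREngine code.

    #include <IOKit/IOLib.h>
    #include <IOKit/audio/IOAudioEngine.h>
    #include <IOKit/audio/IOAudioStream.h>

    enum { kBufferSize = 32 * 512 };   // hypothetical; bytes of shared sample memory

    // MyEngine is assumed to be an IOAudioEngine subclass.
    bool MyEngine::createStreams()
    {
        void* shared = IOMalloc(kBufferSize);   // one buffer for both streams
        if (!shared)
            return false;

        IOAudioStreamDirection dirs[2] = { kIOAudioStreamDirectionOutput,
                                           kIOAudioStreamDirectionInput };
        for (int i = 0; i < 2; i++) {
            IOAudioStream* stream = new IOAudioStream;
            if (!stream || !stream->initWithAudioEngine(this, dirs[i], 1))
                return false;   // (error cleanup omitted)
            // Both streams point at the same memory, so whatever
            // clipOutputSamples() writes is exactly what
            // convertInputSamples() later reads. Nothing copies it.
            stream->setSampleBuffer(shared, kBufferSize);
            // (format setup omitted)
            addAudioStream(stream);
            stream->release();
        }
        return true;
    }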
I've set up my ComplexPlayThru to read from the ARDevice (my
ARDevice does show up in the input drop-downs) and play it to the
"Built-in Audio" output device. Should I be able to hear the
audio? I don't, and I think that I should.
Depends, but normally I would expect it to be heard. However,
ComplexPlayThru is sensitive to timing. It is easy to get it into a
state where it is out of sync and doesn't pass the data through.
Plus, the reflector driver's timing is a tad on the shaky side
(recall that this is sample code for getting driver writers
bootstrapped, not some kind of half-baked IPC mechanism), so it
could easily be the case that it doesn't get heard.
Issue #3:
I'm wondering if this isn't all related to my AudioReflectorDriver
not compiling originally.
Back on November 21 I sent an email to the list about it not
compiling, below is text from the error.
In ARTimeStampGenerator.cpp at line 80, the statement
clock_get_uptime((AbsoluteTime*)&mStartTime); should not have the
argument cast; removing the cast fixes it.
That is the proper fix given the header set you have on your machine.
I have the same fix here on a few of my machines, although I have yet
to figure out what pattern of installs resulted in the headers that
need the cast versus the ones that don't.
I got it to compile, and it would run. I would see log messages in
the system.log, but the device would not show up in the System
Preferences Sound option. After poking around, I found that the
difference between the PhantomAudioDriver and the
AudioReflectorDriver was that the AudioReflectorDriver's Info.plist
did not have two fields, NumBlocks and BlockSize, which I set to 32
and 512 respectively, and then it showed up in the DeviceList (the
two keys are shown below). It's weird because it should have worked
in AREngine::init(). I didn't debug it.
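For reference, the two keys described above would sit in the
driver's Info.plist like this (a sketch; the key spelling is
assumed to match the PhantomAudioDriver's plist, and the values are
the ones given above):

    <key>NumBlocks</key>
    <integer>32</integer>
    <key>BlockSize</key>
    <integer>512</integer>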
The Sound Prefs panel only shows devices that can be set as the
default device. The reflector driver specifically says that it can't
be the default device. Ergo, it isn't going to be displayed by the
Sound Prefs panel. It also can't be used directly by iTunes, since
it can't be the default device.
BTW, this is controlled by the call to setDeviceCanBeDefault() in
ARDevice::initHardware().
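A minimal sketch of that call, assuming the IOAudioFamily API;
MyDevice is a hypothetical IOAudioDevice subclass, not the actual
ARDevice code.

    #include <IOKit/audio/IOAudioDevice.h>

    bool MyDevice::initHardware(IOService* provider)
    {
        if (!IOAudioDevice::initHardware(provider))
            return false;

        // Passing 0 means "never make me a default device", which keeps
        // the device out of the Sound Prefs panel and off-limits to apps
        // like iTunes that only talk to the default device. A normal
        // device would instead pass some combination of
        // kIOAudioDeviceCanBeDefaultInput, kIOAudioDeviceCanBeDefaultOutput,
        // and kIOAudioDeviceCanBeSystemOutput.
        setDeviceCanBeDefault(0);

        // ... create the engine and streams here ...
        return true;
    }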
Issue #4:
I am using a WAV file that is 16-bit, LE, 44100 Hz, mono. When the
Reflector called clipOutputSamples(), it set inFormat->fNumChannels
to 8. Now, this was one of three formats (actually the last one)
defined in the Info.plist. But what is confusing is that this
inFormat->fNumChannels is used in calculations about the layout of
the data. What it did was duplicate my samples (each in their own
32-bit float). My question is: why did the Engine do that, since
the original format was for a single channel?
Basically, when the stream says it has 8 channels, it has 8 channels
regardless of how many channels a given app is using. This is hidden
from most apps since they use AUHAL for dealing with the device. You
can tell AUHAL that you are giving it 1 channel and it will put 0's
in the other channels.
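A minimal sketch of that from user space, assuming you already have
an open AUHAL instance; SetMonoClientFormat and the format values
are hypothetical.

    #include <AudioUnit/AudioUnit.h>

    // Tell AUHAL that the app supplies 1 channel of Float32; AUHAL maps
    // it onto the device's channels and zero-fills the ones not covered.
    static OSStatus SetMonoClientFormat(AudioUnit auhal)
    {
        AudioStreamBasicDescription asbd = { 0 };
        asbd.mSampleRate       = 44100.0;
        asbd.mFormatID         = kAudioFormatLinearPCM;
        asbd.mFormatFlags      = kAudioFormatFlagsNativeFloatPacked;
        asbd.mChannelsPerFrame = 1;
        asbd.mFramesPerPacket  = 1;
        asbd.mBitsPerChannel   = 32;
        asbd.mBytesPerFrame    = sizeof(Float32);
        asbd.mBytesPerPacket   = sizeof(Float32);

        // Input scope, element 0: the format the app feeds AUHAL for output.
        return AudioUnitSetProperty(auhal, kAudioUnitProperty_StreamFormat,
                                    kAudioUnitScope_Input, 0,
                                    &asbd, sizeof(asbd));
    }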
--
Jeff Moore
Core Audio
Apple