On Tue, Aug 4, 2009 at 11:24 AM, Jeff Moore <email@hidden> wrote:
> So, we can state with some certainty that the sample code works outside of Java. So, my guess would be that something about how the Java side of things is involved is messed up. For example, you aren't trying to call back into Java from your render function are you? That definitely wouldn't work.
I agree, the C++ sample code is working. I implemented the same sine function in Java and I'm sending the buffer to Core Audio. It works well; I get the same result (I can perceive the same sound wave) as with the C++ sample.
I have two JNI functions: the first one initializes the sound buffer, and the second one plays it. I printed several sets of values on both the Java and the C++ side, and the data are similar, so I don't think there is a problem with the data coming from Java. As you said, it is probably due to a setting in the ASBDs.
> A couple of other things that come to mind (in no particular order):
> - Are all the ASBD's you are filling out correct? We fixed one already, but there may be others...
I am setting only one ASBD (the one that we fixed).
> - Is Java rendering into the buffer in an asynchronous way?
Yes
> - 470 is a very odd buffer frame size. The default is normally 512. Are you setting it differently? Why?
That's because my wave file has a sample rate of 44100 Hz, not 48000 Hz, while the output unit runs at 48000 Hz. I thought that mismatch could have been an issue, but I also played another wave buffer with a sample rate of 48000 Hz and still got a noisy/metallic sound.
> - Are you trying to copy mono data into an interleaved stereo buffer perhaps?
Hmm, this might be the issue. How can I check whether the buffer is interleaved?
> Or trying to copy interleaved data into one side of a de-interleaved stream? You might want to check the ASBD of the output format to be sure it matches what you expect, especially since most audio devices have at least two channels and your ASBD is talking about just one channel of data...
The wave file is mono, PCM signed, 16-bit, with a sample rate of 44100 Hz (I convert it to 32-bit float in Java). Right now I have two output channels; I'm using the built-in output of my MacBook Pro.
These are the ASBD values of the input format:
SampleRate=44100.000000, BytesPerPacket=4, FramesPerPacket=1, BytesPerFrame=4, BitsPerChannel=32, ChannelsPerFrame=2, theFormatFlags=44
These are the ASBD values of the output format:
SampleRate=48000.000000, BytesPerPacket=8, FramesPerPacket=1, BytesPerFrame=8, BitsPerChannel=32, ChannelsPerFrame=2, theFormatFlags=9
The two formats differ, but I thought the DefaultOutputUnit handled any format conversion needed to match the format of the default device?
In the callback function I'm calling RenderSin like this:

    RenderSin(sSinWaveFrameCount,
              inNumberFrames,
              ioData->mBuffers[0].mData,
              sSampleRate,
              sAmplitude,
              sToneFrequency,
              sWhichFormat,
              myglobalint);
Is it fine if ioData->mBuffers[0].mData is overwritten 100 times per second? I was thinking it would be a good idea to fully load the buffer list first, then play it.
Thanks for the help.