kConverterPrimeMethod_None not working on Tiger

  • Subject: kConverterPrimeMethod_None not working on Tiger
  • From: Brian Willoughby <email@hidden>
  • Date: Sat, 22 Mar 2008 02:35:40 -0700

Hello,

I'm writing an AudioUnit which requires oversampling. I decided to ditch the upsampling and downsampling code that I have in my arsenal in exchange for the more flexible AudioConverter, especially after seeing the quality comparisons. I obviously need to set kAudioConverterPrimeMethod to kConverterPrimeMethod_None, because an AudioUnit cannot seek back in time, nor can it fetch future samples. An AU must process the samples in the current buffer, and none before or after (* see below).

The documentation seems to promise that AudioConverter will not ask me to pre-seek. However, when I set up a stereo 4x oversampling conversion with an AU buffer of 512 frames and ask for 2048 output frames, my AudioConverterComplexInputDataProc is given a request for 516 frames. In fact, it is often asked for 4 extra frames when testing with auval. This would only make sense if I were using the Pre-seek or Normal priming methods, but I am not.

My first instinct was to report this as a bug. However, it is entirely possible that I am doing something wrong. Here is my code (which is called from Initialize(), thus the kAudioUnitErr_FailedInitialization return codes):


OSStatus CreateRealTimeAudioConverter(const AudioStreamBasicDescription *inSourceFormat,
                                      const AudioStreamBasicDescription *inDestinationFormat,
                                      AudioConverterRef *ioAudioConverter)
{
	OSStatus status = AudioConverterNew(inSourceFormat, inDestinationFormat, ioAudioConverter);
	if (noErr != status)
		return kAudioUnitErr_FailedInitialization;

	// Confirm the prime method property is writable and the expected size
	UInt32 dataSize;
	Boolean isWritable;
	UInt32 primeMethod = kConverterPrimeMethod_None;
	status = AudioConverterGetPropertyInfo(*ioAudioConverter, kAudioConverterPrimeMethod,
	                                       &dataSize, &isWritable);
	if (noErr != status || !isWritable || sizeof(primeMethod) != dataSize)
		return kAudioUnitErr_FailedInitialization;

	status = AudioConverterSetProperty(*ioAudioConverter, kAudioConverterPrimeMethod,
	                                   dataSize, &primeMethod);
	if (noErr != status)
		return kAudioUnitErr_FailedInitialization;

	return noErr;
}



I am testing for kConverterPrimeMethod_None using AudioConverterGetProperty later in my render, so I'm quite certain the setting was successful.


Can anyone tell me whether this feature has been tested? Should I expect it to work? Should I report a bug?


Here is the relevant documentation from the headers:


The very first call to AudioConverterFillBuffer(), or first call after AudioConverterReset(), will request additional input frames beyond those normally expected in the input proc callback to fulfill this first AudioConverterFillBuffer() request. The number of additional frames requested, depending on the prime method, will be approximately:


    kConverterPrimeMethod_Pre       leadingFrames + trailingFrames
    kConverterPrimeMethod_Normal    trailingFrames
    kConverterPrimeMethod_None      0


Thus, in effect, the first input proc callback(s) may provide not only the leading frames, but also may "read ahead" by an additional number of trailing frames depending on the prime method.

kConverterPrimeMethod_None is useful in a real-time application processing live input, in which case trailingFrames (relative to input sample rate) of through latency will be seen at the beginning of the output of the AudioConverter. In other real-time applications such as DAW systems, it may be possible to provide these initial extra audio frames since they are stored on disk or in memory somewhere and kConverterPrimeMethod_Pre may be preferable. The default method is kConverterPrimeMethod_Normal, which requires no pre-seeking of the input stream and generates no latency at the output.


P.S. I have implemented GetLatency() to add the trailingFrames from both my upsampling and downsampling AudioConverters, which should properly adjust for the through latency described above for kConverterPrimeMethod_None. Needless to say, it's disturbing that I am still asked for leading and/or trailing frames.


(*) If I were to implement my own oversampling, I could easily copy buffers around to maintain "latency" samples in addition to the buffers provided by the host, so that I could pre-roll my own sample rate conversion buffering. However, the point of using AudioConverter is to avoid all of this overhead in my code, since CoreAudio is advertised as being able to handle this for me.

Brian Willoughby
Sound Consulting



  • Follow-Ups:
    • Re: kConverterPrimeMethod_None not working on Tiger
      • From: William Stewart <email@hidden>