Re: Resampling
- Subject: Re: Resampling
- From: email@hidden
- Date: Fri, 13 Mar 2009 22:27:53 -0700 (PDT)
I think I'm missing something. If, for every input I receive, I fill a buffer and ask for less data than what's in the buffer, what do I do with the rest? Keep it around until the next input arrives? But if I always ask for fewer frames than I just received, I will gradually accumulate a growing delay, unless at some point I call AudioConverterFillComplexBuffer twice. Or should I keep track of how many leftover frames I have, so that I can ask for a different amount each time I call AudioConverterFillComplexBuffer? Alternatively, should I be calling AudioConverterFillComplexBuffer from a different thread, so that I can call it more often than I receive input?
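For what it's worth, that leftover-tracking idea boils down to a few lines of bookkeeping. A minimal sketch, assuming 44100 Hz in and 16000 Hz out; the function name and the statics are purely illustrative, not part of any API:

#include <AudioToolbox/AudioToolbox.h>  /* brings in UInt32/UInt64 */

/* Running totals of frames in and out. Requesting the floor of what
   the accumulated input can support keeps the fractional remainder
   inside the converter instead of letting a delay build up. */
static UInt64 totalIn  = 0;   /* 44.1 kHz frames handed to the converter */
static UInt64 totalOut = 0;   /* 16 kHz frames requested so far          */

static UInt32 NextRequestSize(UInt32 freshInputFrames)
{
    totalIn += freshInputFrames;
    UInt64 producible = (totalIn * 16000) / 44100;      /* floor */
    UInt32 request = (UInt32)(producible - totalOut);
    totalOut = producible;
    return request;  /* alternates between 185 and 186 for 512-frame input */
}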
----- Original Message ----
From: Brian Willoughby <email@hidden>
To: email@hidden
Cc: CoreAudio API <email@hidden>
Sent: Friday, March 13, 2009 5:17:21 PM
Subject: Re: Resampling
When you're dealing with real-time sample rate conversion, you cannot simply connect things end-to-end without some kind of buffering (i.e. latency). Just pick a buffer size that you're happy with - maybe even make it configurable - and then fill that buffer from your input. Then, when calling the AudioConverter, just make sure you ask for less data than the buffer holds. In fact, you could simply hand the entire buffer over to the AudioConverter with each call, even if it asks for less, but you'd need a dual-buffer system so that you can be filling one buffer with input data while the AudioConverter works with the other. In other words, you'll always be able to store whatever number of frames you currently have from the input, provided that your buffers are large enough.
You're looking at the one situation where CoreAudio is more difficult than the intuitive approach might suggest. Older audio systems are push models, which make recording to a file very simple: you push all the data you have to the file as soon as you get it. CoreAudio is a pull model, though, and that makes this a bit convoluted. However, the timing of file writes is not so critical - in fact, you should be able to tolerate a very long latency - and CoreAudio's pull model works far better for audio output devices, where timing is critical. This second paragraph doesn't really help you solve your problem, but I hope it explains why this isn't any simpler.
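To make the pull side concrete, here is a minimal sketch of an AudioConverter input proc that hands over only what the buffer currently holds. The InputState struct and its fields are made up for illustration; only the callback signature and the AudioBufferList fields come from AudioToolbox. It assumes interleaved float LPCM, where one packet equals one frame:

#include <AudioToolbox/AudioToolbox.h>

typedef struct {
    float  *samples;         /* 44.1 kHz input staged by the record callback */
    UInt32  framesAvailable; /* how many of those frames are valid           */
    UInt32  channels;
} InputState;

/* Called by AudioConverterFillComplexBuffer whenever it needs input. */
static OSStatus InputProc(AudioConverterRef              converter,
                          UInt32                        *ioNumberDataPackets,
                          AudioBufferList               *ioData,
                          AudioStreamPacketDescription **outPacketDesc,
                          void                          *inUserData)
{
    InputState *state = (InputState *)inUserData;

    /* Never promise more than the buffer actually holds; supplying
       fewer packets than requested is allowed, and the converter
       will simply produce fewer output frames. */
    if (*ioNumberDataPackets > state->framesAvailable)
        *ioNumberDataPackets = state->framesAvailable;

    ioData->mBuffers[0].mData           = state->samples;
    ioData->mBuffers[0].mNumberChannels = state->channels;
    ioData->mBuffers[0].mDataByteSize   =
        *ioNumberDataPackets * state->channels * (UInt32)sizeof(float);

    /* Real code would advance a ring-buffer read pointer here. */
    state->samples         += *ioNumberDataPackets * state->channels;
    state->framesAvailable -= *ioNumberDataPackets;
    return noErr;
}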
Brian Willoughby
Sound Consulting
On Mar 13, 2009, at 16:30, email@hidden wrote:
Great, I'm starting to understand how this works... :-)
I am also trying to convert input data (recording) from 44.1 kHz to 16 kHz, and the problem here is that I don't know how many 16 kHz frames I will need. After all, the recording callback gives me a fixed number of 44.1 kHz frames (512), but the AudioConverterFillComplexBuffer function expects me to say how many 16 kHz frames those will turn into.
If I always ask for 185, the converter will eventually store more and more leftovers. If I always ask for 186, the converter will call me multiple times (based on what Doug said below) but I won't have anything to provide it.
How can I simply convert whatever number of frames I currently have?
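For the record, 512 frames at 44.1 kHz correspond to 512 * 16000 / 44100, roughly 185.76 frames at 16 kHz, which is exactly why neither a fixed 185 nor a fixed 186 can line up. Tying the sketches above together, the record callback might drive the converter roughly like this (converter, state, and outputSamples are assumed to be set up elsewhere; NextRequestSize and InputProc are the illustrative helpers sketched above, not CoreAudio calls):

/* Inside the 44.1 kHz record callback, after staging the fresh
   512 frames in state (mono assumed for brevity): */
UInt32 outFrames = NextRequestSize(512);       /* 185 or 186 */

AudioBufferList outList;
outList.mNumberBuffers = 1;
outList.mBuffers[0].mNumberChannels = 1;
outList.mBuffers[0].mDataByteSize   = outFrames * (UInt32)sizeof(float);
outList.mBuffers[0].mData           = outputSamples;

OSStatus err = AudioConverterFillComplexBuffer(converter, InputProc,
                                               &state, &outFrames,
                                               &outList, NULL);
/* On return, outFrames holds the number of 16 kHz frames actually
   produced; write that many to the file. */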
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden