
Re: AudioQueue and Seeking in a VBR file


  • Subject: Re: AudioQueue and Seeking in a VBR file
  • From: William Stewart <email@hidden>
  • Date: Wed, 26 Mar 2008 12:50:53 -0700

This is really, I think, one of the problems with Ogg. AAC and MP3 use short and long block processing (like Ogg does), but they hide this behind a standardised packet length (the same number of samples is represented in each AAC or MP3 packet) and a constant decoder latency.

However, Ogg has not done this, so either the file has to contain additional information (how many sample frames are in each Ogg packet) or the reader/writer has to do additional parsing. If you don't have an external packet table (as MPEG-4 and CAF files do), it's really bad, because you have to go parse through the Ogg packet itself to find out how many samples it represents. This gets even trickier outside of the codec stream, because encoded bit streams have processing latencies: if your Ogg packets are different sizes and you have a run of these differently sized packets, does the latency of your decode change as a result? I don't know enough about these niceties of Ogg to help you with that, but that is some of the information you need to find out.

As far as the Core Audio APIs go, we have all of the structures, etc., in the API that I think you will need to deal with this. Packet descriptions have entries for both bytes and frames per packet for exactly this reason. The audio queue buffer calls have trim arguments that can be used to deal with decoder latencies. The audio queue APIs also support time stamps on enqueued buffers, and provide calls that can deal with either "real time" (the host time in the audio time stamp, which can be used across a number of audio queues) or "sample time" (the sample counts on a given audio queue's particular time axis).
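To illustrate how per-packet frame counts turn into sample times, here is a minimal sketch in plain C. The struct is a simplified stand-in for Core Audio's AudioStreamPacketDescription (which carries a byte offset, a variable frame count, and a byte size per packet); the function name and the constant-frames shortcut are illustrative assumptions, not Core Audio API.

```c
#include <stdint.h>

/* Simplified stand-in for a packet description: it carries both a byte
 * count and a frame count per packet, precisely so that formats with
 * variable frames per packet (like Ogg) can be described. */
typedef struct {
    int64_t  startOffset;     /* byte offset of the packet in the buffer */
    uint32_t framesInPacket;  /* per-packet frame count for Ogg-like formats */
    uint32_t byteSize;
} PacketDesc;

/* Convert a packet index into a sample time. For constant-frame formats
 * (e.g. AAC's 1024 or MP3's 1152 frames per packet) this is a multiply;
 * for variable-frame formats the per-packet counts must be summed. */
static int64_t sample_time_of_packet(const PacketDesc *descs, int count,
                                     uint32_t constFramesPerPacket,
                                     int packetIndex)
{
    if (constFramesPerPacket != 0)
        return (int64_t)packetIndex * constFramesPerPacket;
    int64_t frames = 0;  /* variable case: accumulate preceding packets */
    for (int i = 0; i < packetIndex && i < count; i++)
        frames += descs[i].framesInPacket;
    return frames;
}
```

The variable-frame path is why an Ogg reader needs either a table or a full scan: the sample time of packet N depends on every packet before it.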

If it were me, I would start with, say, PCM or even IMA and get the basics of your timing model sorted out; then at least you have a starting point for dealing with the complexities of the VBR formats. CAF files can contain any of these audio formats as well, so if you want to prototype with a file format that should have the information you need, I think you can do that.

Bill

On Mar 26, 2008, at 11:33 AM, Matthew Leon Grinshpun wrote:
Sorry, I was hasty in writing this e-mail. Here is a corrected version:

First, thanks for everyone's help in answering my previous question
about AudioFileComponent. I'm working on something for Ogg at the
moment... I have a tangentially related concern that is bothering me a
bit:

I am working with audio queues and wondering how to properly keep
track of time, as well as seek to a certain position, in VBR files.
Basically, I'm wondering what the most commonly implemented solution
is to the problem, as I'm sure it's one that many people working with
these files face at some point.

The only obvious solution to the issue, it seems to me, is to write a
function that iterates over the entire file and builds a "map" of the
number of frames contained in each packet. I assume I can just
allocate, say, a UInt64 for each packet and store a corresponding
frame number. Is this the way this task is generally performed, or is
there a cheaper (clever) way around this issue? In the case of
Core Audio, is doing AudioFileRead() and then inspecting the packet
descriptions the best way?
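For what it's worth, the scan-once-and-build-a-map approach described above might be sketched like this in plain C. The per-packet frame counts would come from the packet descriptions returned by AudioFileRead (or from parsing the Ogg packets themselves); the function names and structure here are a hypothetical illustration, not an existing API.

```c
#include <stdint.h>
#include <stdlib.h>

/* Build a cumulative frame-offset table: entry i holds the sample time
 * at which packet i begins. This takes one pass over the file's packets
 * and one int64 per packet of memory. */
static int64_t *build_frame_map(const uint32_t *framesPerPacket,
                                int packetCount)
{
    int64_t *map = malloc(sizeof(int64_t) * (size_t)packetCount);
    if (!map) return NULL;
    int64_t total = 0;
    for (int i = 0; i < packetCount; i++) {
        map[i] = total;
        total += framesPerPacket[i];
    }
    return map;
}

/* Binary-search the map for the packet containing a given sample time,
 * so each seek is O(log n) instead of re-walking the whole table. */
static int packet_for_sample(const int64_t *map, int packetCount,
                             int64_t sample)
{
    int lo = 0, hi = packetCount - 1;
    while (lo < hi) {
        int mid = (lo + hi + 1) / 2;
        if (map[mid] <= sample) lo = mid;
        else hi = mid - 1;
    }
    return lo;
}
```

A seek then becomes: find the packet index for the target sample, start reading (and decoding) from that packet, and trim the leading frames between the packet's start time and the target sample.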


_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list      (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden

References: 
 >AudioQueue and Seeking in a VBR file (From: "Matthew Leon Grinshpun" <email@hidden>)
 >Re: AudioQueue and Seeking in a VBR file (From: "Matthew Leon Grinshpun" <email@hidden>)
