
Re: Pre-initialising AudioConverter codec decoder


  • Subject: Re: Pre-initialising AudioConverter codec decoder
  • From: William Stewart <email@hidden>
  • Date: Tue, 13 Nov 2007 11:10:48 -0800


On Nov 12, 2007, at 7:33 PM, Tim Hewett wrote:

> I am implementing a plugin codec and also an application which will use it via AudioConverter.

> It is all working except that I am having trouble seeing how to have the app tell the decoder the packet size of the incoming stream, when it is the decoder which has the ability to work that out from each packet header.

> The stream uses fixed size packets for a given encoding bitrate (which doesn't change while the stream is running), so I am trying to avoid using packet descriptions to keep the complexity down.

When the encoder generates the bit stream, you are asked to provide an ASBD that describes the output format. Instead of providing a zero in the bytes-per-packet field, you could try to provide a fully specified ASBD that presents the CBR nature of the bit stream you are going to produce, based on the codec's current settings.
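For what it's worth, a minimal sketch (plain C, AudioToolbox) of a fully specified CBR ASBD along these lines; the format ID and the packet/frame sizes here are hypothetical, not from the original post:

#include <AudioToolbox/AudioToolbox.h>

AudioStreamBasicDescription outFormat = {0};
outFormat.mSampleRate       = 44100.0;   // sample rate of the decoded audio
outFormat.mFormatID         = 'xmpl';    // hypothetical custom codec fourcc
outFormat.mChannelsPerFrame = 2;
outFormat.mFramesPerPacket  = 1024;      // frames of audio in each encoded packet
outFormat.mBytesPerPacket   = 192;       // fixed packet size at the current bitrate
// mBytesPerFrame and mBitsPerChannel stay 0 for a compressed format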


Then, when the data is put in a file, fed to a decoder, streamed, etc., it is treated as a CBR bit stream. Your encoder then has the responsibility of absolutely ensuring that each packet is always exactly that many bytes big (no smaller or larger).

That's the only way I could see this working.

But, regardless, the packet descriptions are really not that much of a problem, and the benefit they provide in being able to allocate bits intelligently (more bits for harder passages, less for easier ones, barely any for silence) far outweighs the cost of supporting them.
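For reference, this is all a packet description carries - a sketch with made-up numbers: a byte offset into the buffer, an optional per-packet frame count, and the packet's size.

AudioStreamPacketDescription desc;
desc.mStartOffset            = 0;    // byte offset of this packet within the buffer
desc.mVariableFramesInPacket = 0;    // 0 when every packet holds the same number of frames
desc.mDataByteSize           = 157;  // this packet's size in bytes (varies for VBR)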

> In any case it is the decoder which understands the framing structure of the stream, so I would prefer it to handle all that internally.

Yes, but it's not just the encoder; it is all the files, streams, etc. that have to deal with this as well.


> The problem is that AudioConverterFillComplexBuffer() and its associated ComplexInputDataProc() operate on whole packets, and it seems that the packet length has to be set before decoding starts. Yet it is not until the decoder has received some data that it knows what the stream parameters are.

There's no requirement to know the packet length - how could you, if the size of the packets is variable? How the data is read and provided to you is, in any case, outside the purview of the decoder, so not its concern.
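As an illustration (not from the original thread), a rough sketch of what a ComplexInputDataProc might look like for the fixed-packet-size case, assuming the fully specified ASBD above; MyStream, its fields, and MyStreamRead() are hypothetical:

static OSStatus MyInputProc(AudioConverterRef              inConverter,
                            UInt32                        *ioNumberDataPackets,
                            AudioBufferList               *ioData,
                            AudioStreamPacketDescription **outDataPacketDescription,
                            void                          *inUserData)
{
    MyStream *stream = (MyStream *)inUserData;

    // With a nonzero mBytesPerPacket in the source ASBD the converter can
    // work out packet boundaries itself, so no packet descriptions are filled in.
    UInt32 bytesWanted = *ioNumberDataPackets * stream->bytesPerPacket;
    UInt32 bytesRead   = MyStreamRead(stream, stream->buffer, bytesWanted);  // hypothetical read
    if (bytesRead == 0) {
        *ioNumberDataPackets = 0;   // end of stream
        return noErr;
    }

    ioData->mBuffers[0].mData         = stream->buffer;
    ioData->mBuffers[0].mDataByteSize = bytesRead;
    *ioNumberDataPackets = bytesRead / stream->bytesPerPacket;
    return noErr;
}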


> It seems to be a chicken-and-egg situation and I am wondering if there is a solution. Is there a way to initially pass some data to the decoder only for analysing the stream parameters rather than producing decoded output? Or is it ok for the decoder to set these things when it decodes the first frame? For the latter it seems to me that the ComplexInputDataProc() must pass in an initial packet of the maximum size, then get the new size and rewind to the stream start... etc.

As for this - this is what the magic cookie is for. It provides a block of "opaque" data that is specific to the format and unknown to anything else. It is the responsibility of the files, etc., to carry the cookie around with them and to provide that cookie to the decoder before they provide any data - and this would be perfect for your needs (and why this exists).
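A minimal sketch of handing the cookie to the decoder before any packets are fed in; where cookieData and cookieSize come from (file header, out-of-band signalling, etc.) depends on the stream format and is assumed here:

// Give the decoder its codec-specific configuration before decoding starts.
OSStatus err = AudioConverterSetProperty(converter,
                                         kAudioConverterDecompressionMagicCookie,
                                         cookieSize,     // UInt32, assumed known
                                         cookieData);    // opaque bytes carried with the stream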


Bill



> This all seems convoluted, is there a better way? I would prefer not to have the encoder stream the packet descriptions, so as to keep the stream complexity down, and I would also prefer to keep the knowledge of internal stream data in the plugin codec rather than have the application analyse the stream to set up the codec. I also like my breakfast cooked for me. Am I asking for too much, or missing something perhaps? Maybe set the packet size to one byte and let the decoder take what it wants as it runs and handle all framing internally (as it does so well)? I sense there must be a more elegant solution...
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:

This email sent to email@hidden

References: 
  • Pre-initialising AudioConverter codec decoder (From: Tim Hewett <email@hidden>)
