
Re: Song Navigation with AudioQueueEnqueueBufferWithParameters


  • Subject: Re: Song Navigation with AudioQueueEnqueueBufferWithParameters
  • From: William Stewart <email@hidden>
  • Date: Mon, 13 Oct 2008 11:52:12 -0700

Not really.

There is a confusion here between the timeline that the audio queue is using (its playback timeline - essentially how many samples have elapsed since you started the queue) and which buffers you are playing back (enqueueing) at any given time.


Imagine a simple scenario of one buffer with 10,000 samples in it.

You start the queue - it plays the buffer, and as it plays through this first time, the audio queue's time is exactly where it is in this buffer (when the audio queue is at sample 5,000, say, it is at sample 5,000 in the buffer).

Now, you re-enqueue the buffer (like a loop). As the audio queue plays on, its time continues to progress, so it could now have a time of, say, 15,000 while being at sample 5,000 in your buffer.
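
In code, that looping scenario is just the output callback handing the same buffer back to the queue. A minimal sketch, assuming a hypothetical callback registered with AudioQueueNewOutput and constant-bitrate (PCM) data, so no packet descriptions are needed:

static void MyOutputCallback(void *inUserData,
                             AudioQueueRef inAQ,
                             AudioQueueBufferRef inBuffer)
{
    // inBuffer still holds our 10,000 samples; hand it straight back.
    // The queue's own sample time keeps counting up across every pass.
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}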


Now, let's say that you get the callback again, but this time you want to wait 100,000 samples before the sound plays again.


So, you use enqueue buffer with params... the timestamp you provide now is going to be 120,000:
- 10,000 for the duration of the first buffer
- 10,000 for the duration of the second buffer
- 100,000 for the gap you want...


So, the queue is still playing and still counting time up, but it won't actually play your buffer until its playback head reaches the value you provided (120,000). When it gets there, it will play the first sample (0) of your 10,000-sample buffer... so at audio queue time 125,000 it will be at sample 5,000 in your buffer.
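
In code, providing that start time would look something like this (a sketch only - queue and buf are hypothetical variables, and the data is assumed to be PCM, so no packet descriptions are passed):

AudioTimeStamp startTime = { 0 };
startTime.mSampleTime = 120000;   // expressed in the queue's sample timeline
startTime.mFlags      = kAudioTimeStampSampleTimeValid;

AudioQueueEnqueueBufferWithParameters(queue, buf,
    0, NULL,      // no packet descriptions (PCM)
    0, 0,         // no trimming at start or end
    0, NULL,      // no parameter events
    &startTime,   // don't start this buffer before queue time 120,000
    NULL);        // not interested in the actual start time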

For the trim frames - these tell the queue to take samples off the beginning or the end of the buffer that you enqueue...

So, let's say that when we come back from our last one, you don't want to play the first 5,000 samples of your buffer... You use enqueue buffer with params:
inStartTime == NULL (we want it to play back right when the previous one finishes - no gap)
inTrimFramesAtStart == 5,000 (don't play the first 5,000 frames)


So, the duration of this enqueuing operation of our 10,000-sample buffer is now 5,000. The queue will immediately play that buffer from the trimmed offset (5,000) when it finishes the previous buffer (there is no gap).
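
As a sketch (same hypothetical queue and buf as above), that trimmed, gapless enqueue would be:

AudioQueueEnqueueBufferWithParameters(queue, buf,
    0, NULL,      // no packet descriptions (PCM)
    5000, 0,      // inTrimFramesAtStart == 5,000; nothing trimmed at the end
    0, NULL,      // no parameter events
    NULL,         // inStartTime == NULL: start right after the previous buffer
    NULL);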

The question then is, what is going to be the queue time after this buffer is played?

  120,000 (the start time of the previous buffer, after the gap)
+  10,000 for the duration of that previous buffer
+   5,000 for the duration of this trimmed buffer
--------------
= 135,000
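
If you want to watch the timeline advance, you can poll the queue with AudioQueueGetCurrentTime (a sketch; passing NULL for the timeline object and the discontinuity flag is fine if you don't need discontinuity detection):

AudioTimeStamp now = { 0 };
AudioQueueGetCurrentTime(queue, NULL, &now, NULL);
// Once the trimmed buffer has finished, now.mSampleTime will have
// passed 135,000.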


I've used just one buffer in this example because I am trying to make the difference really clear between the audio queue's playback timeline and the "time" that is represented by the buffers you provide to the queue. The example works just as well with multiple buffers, of course.
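
For the original seek-with-a-slider question, note that many players don't position buffers with timestamps at all. A common alternative pattern - a rough sketch with assumed names (the Player struct, kNumberBuffers, and the callback declaration below are illustrative, not from the SDK) - is to stop the queue, move the file read cursor, and re-prime:

enum { kNumberBuffers = 3 };

typedef struct {
    AudioQueueRef               queue;
    AudioStreamBasicDescription dataFormat;
    SInt64                      currentPacket;   // read cursor used by the callback
    AudioQueueBufferRef         buffers[kNumberBuffers];
} Player;

// Output callback assumed to read from the file at player->currentPacket
// (as speakHere's playback callback does) and enqueue what it read.
void MyOutputCallback(void *inUserData, AudioQueueRef inAQ,
                      AudioQueueBufferRef inBuffer);

void SeekToSeconds(Player *player, Float64 seconds)
{
    // Stop synchronously; this also flushes buffers already enqueued.
    AudioQueueStop(player->queue, true);

    // Move the read cursor to the packet nearest the target time
    // (assumes a constant frames-per-packet format such as PCM).
    Float64 packetsPerSecond =
        player->dataFormat.mSampleRate / player->dataFormat.mFramesPerPacket;
    player->currentPacket = (SInt64)(seconds * packetsPerSecond);

    // Re-prime each buffer by invoking the output callback by hand,
    // then start; playback resumes from the new position.
    for (int i = 0; i < kNumberBuffers; ++i)
        MyOutputCallback(player, player->queue, player->buffers[i]);
    AudioQueueStart(player->queue, NULL);
}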


HTH

Bill

On Oct 11, 2008, at 12:36 AM, Ignacio Enriquez wrote:

Hi everyone.
(I have to say that I am quite new to Core Audio, and new to this list as well.)
I am trying to use the AudioQueueEnqueueBufferWithParameters function so I can enqueue buffers corresponding to a certain time - in other words, song navigation.
(Suppose I have a GUI timeline, for example a UISlider, and I want to be able to jump to any position in the audio, so I can find the buffers corresponding to that time and enqueue them to be played. As far as I understand, this can be accomplished by using AudioQueueEnqueueBufferWithParameters.)


For example, in the iPhone speakHere project > AudioPlayer.m > playbackCall function there is:


AudioQueueEnqueueBuffer(
    inAudioQueue,
    bufferReference,
    ([player packetDescriptions] ? numPackets : 0),
    [player packetDescriptions]
);


but I want to have more control, so I would use:

UInt32 inTrimFramesAtStart, inTrimFramesAtEnd;
AudioQueueTimelineRef myOutTimeline;
AudioQueueCreateTimeline(inAudioQueue, &myOutTimeline); // is this correct?

AudioQueueEnqueueBufferWithParameters(
    inAudioQueue,
    bufferReference,
    ([player packetDescriptions] ? numPackets : 0),
    [player packetDescriptions],
    inTrimFramesAtStart,  // ???????
    inTrimFramesAtEnd,    // ???????
    0,                    // since I am not passing any parameter
    NULL,                 // since I am not setting any parameter
    NULL,                 // ASAP
    &myOutTimeline,       // something tells me that this is not good

);

As you can see, this function's parameters are incomplete.
I have read the documentation, but I still cannot figure out what to put in inTrimFramesAtStart, inTrimFramesAtEnd, and &myOutTimeline.


Can anyone kindly explain this to me? I would be very grateful. Thanks in advance.


nacho4d

References:
  • Song Navigation with AudioQueueEnqueueBufferWithParameters (From: "Ignacio Enriquez" <email@hidden>)