Re: seeking with aqofflinerender
- Subject: Re: seeking with aqofflinerender
- From: William Stewart <email@hidden>
- Date: Thu, 16 Jul 2009 19:58:51 -0700
The time for an audio queue is the current sample being rendered, where zero is the first sample you render when you start the queue. As you are calling AQOfflineRender yourself, this is of course just the count of the samples you have rendered.
When you provide buffers you are providing some number of packets, and each packet represents some number of samples; for linear PCM this is 1 sample per packet.
That should be a reasonably straightforward calculation to go from your input side to your output side.
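In code, that bookkeeping might look something like the sketch below. The struct and helper names are illustrative only, not part of any API; the only real Core Audio calls referenced are AudioFileReadPackets() and AudioQueueOfflineRender(), and the sketch assumes a constant frames-per-packet format such as linear PCM.

    #include <AudioToolbox/AudioToolbox.h>

    // Illustrative bookkeeping for an offline-render pipeline. The struct and
    // helper names are made up for this sketch; only AudioFileReadPackets()
    // and AudioQueueOfflineRender() are real Core Audio calls. Assumes a
    // constant frames-per-packet format (1 frame per packet for linear PCM).
    typedef struct {
        SInt64 packetsRead;      // next packet index to pass to AudioFileReadPackets()
        SInt64 framesRendered;   // running sample count produced by AudioQueueOfflineRender()
        UInt32 framesPerPacket;  // from the file's AudioStreamBasicDescription
    } OfflinePosition;

    // Input side: after AudioFileReadPackets() returns, advance the packet cursor.
    static void NotePacketsRead(OfflinePosition *pos, UInt32 numPackets)
    {
        pos->packetsRead += numPackets;
    }

    // Output side: after each AudioQueueOfflineRender() call, add the frames it
    // produced. The queue's "time" for a buffer is just framesRendered at the
    // moment you render it (zero for the very first buffer).
    static void NoteFramesRendered(OfflinePosition *pos, UInt32 numFrames)
    {
        pos->framesRendered += numFrames;
    }

    // Seeking: convert a target sample position into the inStartingPacket value
    // for AudioFileReadPackets(), and reset both cursors to that point so the
    // input and output sides stay in step.
    static SInt64 SeekToSample(OfflinePosition *pos, SInt64 samplePosition)
    {
        SInt64 packetIndex  = samplePosition / pos->framesPerPacket;
        pos->packetsRead    = packetIndex;
        pos->framesRendered = packetIndex * (SInt64)pos->framesPerPacket;
        return packetIndex;
    }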
Just out of curiosity - what are you using AQOfflineRender to do?
Bill
On Jul 16, 2009, at 10:57 AM, Danny Sung wrote:
((This didn't seem to go through the first time, so I'm resending. Sorry if it's a dup.))
So I'm using AQOfflineRender on the iPhone to do some audio manipulation before sending it to a play queue, and now I'm trying to figure out how to seek. It's easy enough to give the AudioFileReadPackets() call the packet you want to read from. But my problem is that I don't know how to correlate what's currently playing with what I'm currently rendering (due to the various buffers).
Do I just have to keep track of timestamps and when various events took place, then calculate based on time? It'd be nice to have a method that was a bit more accurate.
Anyone have any suggestions on how I should go about this?
Thanks,
Danny