Re: Iterating through audio data
- Subject: Re: Iterating through audio data
- From: Steven Winston <email@hidden>
- Date: Wed, 25 May 2011 18:06:14 -0700
The audio data is just PCM data. It's always going to boil down to
PCM data, which is a byte array of some form. The representation of
the audio (samples per second, format of the samples) is entirely
specific to the audio format that comes in. You can successfully read
and write audio without Core Audio, or even without the play logic;
that can be completely separate from this entire process.
However, if you want things to line up correctly then yes, you must
use the player and the visualizer in tandem. That is to say, I'd
personally use audio units to play the audio, grab the PCM data from
the same buffer, and visualize off of that. It's always *just* byte
data that has a range, so plotting the points as they scale between
the high and the low of that range will give you a very rough
visualizer.
The size of the buffer, the type of the data, the sample rate, etc.
are all governed by the way you set up your playback.
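To make the "plot points as they scale between the high and the low" idea concrete, here's a minimal sketch in plain C (hypothetical names; it assumes signed 16-bit PCM, but the same mapping works for any sample type once you know its range):

```c
#include <assert.h>
#include <stdint.h>

/* Map one signed 16-bit PCM sample to a row in a display that is
 * `height` pixels tall: -32768 -> row 0 (bottom), 32767 -> top row. */
int sample_to_row(int16_t s, int height)
{
    /* Normalize the sample into [0, 1] using the type's full range,
     * then scale to the available pixel rows. */
    double norm = (s - (double)INT16_MIN) / ((double)INT16_MAX - (double)INT16_MIN);
    return (int)(norm * (height - 1));
}
```

Drawing one such point per column, per sample (or per block of samples), gives the rough oscilloscope-style view described above.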
On Wed, May 25, 2011 at 5:53 PM, Gregory Wieber <email@hidden> wrote:
> Hi GW,
> The ASBD defines the type of audio data you'd like to work with. In the ASBD
> you can specify that you would like to work with floating point values,
> ranging from -1 to 1, for instance.
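As a sketch of that, here is an ASBD filled out for mono 32-bit float samples in [-1, 1]. The struct below mirrors the layout of Core Audio's `AudioStreamBasicDescription` (the real definition lives in `<CoreAudio/CoreAudioTypes.h>`); it is reproduced locally only so the example stands alone:

```c
#include <assert.h>
#include <stdint.h>

/* Local stand-in for AudioStreamBasicDescription, field-for-field. */
typedef struct {
    double   mSampleRate;
    uint32_t mFormatID;
    uint32_t mFormatFlags;
    uint32_t mBytesPerPacket;
    uint32_t mFramesPerPacket;
    uint32_t mBytesPerFrame;
    uint32_t mChannelsPerFrame;
    uint32_t mBitsPerChannel;
    uint32_t mReserved;
} ASBD;

/* Mono, 32-bit float linear PCM at 44.1 kHz. */
ASBD make_float_asbd(void)
{
    ASBD d = {0};
    d.mSampleRate       = 44100.0;
    d.mFormatID         = 0x6C70636D;  /* 'lpcm', kAudioFormatLinearPCM */
    d.mChannelsPerFrame = 1;
    d.mBitsPerChannel   = 32;          /* sizeof(float) * 8 */
    d.mBytesPerFrame    = 4;           /* one float per frame (mono) */
    d.mFramesPerPacket  = 1;           /* always 1 for uncompressed PCM */
    d.mBytesPerPacket   = 4;           /* mBytesPerFrame * mFramesPerPacket */
    return d;
}
```

With a format like this, every sample you pull out of a buffer is already a float between -1 and 1, which makes the plotting step trivial.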
> The heart of audio-processing, in Core Audio, is typically done inside a
> Render Callback function. My experience is with iOS development primarily,
> so keep that in mind.
> You're looking to access the audio data in 'real time', if I understand you
> correctly. A buffer is passed into the Render Callback function -- let's say
> it is 1024 frames long. While you read or write this buffer, another has
> already been sent off to the speakers. This cycle continues for the
> duration.
> So, what you want to do is create something like a C struct that contains an
> array. On every Render Callback, you write the audio buffer into your C
> struct's array (clearly you don't need a struct just for the array, but I'm
> guessing it will be useful for storing other information you may want to
> access later). It's best to do all of this using pointers, so that you're
> not allocating memory during the Render Callback.
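A minimal sketch of that struct-plus-array idea (hypothetical names, plain C so it stands alone). In a real app the body of `tap_write` would live inside your `AURenderCallback`, copying from `ioData->mBuffers[0].mData` after the unit has rendered; the key point is that the buffer is preallocated, so the audio thread never calls malloc:

```c
#include <assert.h>
#include <stdint.h>

#define TAP_CAPACITY 4096   /* power of two, >= your largest render buffer */

/* Preallocated ring buffer the render callback copies into and the
 * GUI reads from later. No allocation happens on the audio thread. */
typedef struct {
    float    samples[TAP_CAPACITY];
    uint32_t writeIndex;    /* next slot to write */
} VisualizerTap;

/* Copy `count` float samples into the tap, wrapping at capacity. */
void tap_write(VisualizerTap *tap, const float *src, uint32_t count)
{
    for (uint32_t i = 0; i < count; i++)
        tap->samples[(tap->writeIndex + i) % TAP_CAPACITY] = src[i];
    tap->writeIndex = (tap->writeIndex + count) % TAP_CAPACITY;
}
```

(A production version would also need to make the write index safe to read from the GUI thread, e.g. with atomics; that's omitted here for brevity.)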
> In your app's GUI code, you can pick an interval (say 24-60 frames per sec)
> and take a look at what's in your C Struct's Array, and then plot that data
> on a graph, or render visually however you see fit.
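One common way to turn the stored samples into a waveform plot on the GUI side is to reduce them to one (min, max) pair per horizontal pixel column and draw a vertical line per pair. A sketch (hypothetical names, not tied to any particular drawing API):

```c
#include <assert.h>
#include <stdint.h>

/* Reduce `n` samples to `buckets` (min, max) pairs -- one pair per
 * pixel column of the waveform view. Assumes n >= buckets >= 1. */
void downsample_minmax(const float *in, uint32_t n,
                       float *mins, float *maxs, uint32_t buckets)
{
    for (uint32_t b = 0; b < buckets; b++) {
        uint32_t start = b * n / buckets;
        uint32_t end   = (b + 1) * n / buckets;
        float lo = in[start], hi = in[start];
        for (uint32_t i = start + 1; i < end; i++) {
            if (in[i] < lo) lo = in[i];
            if (in[i] > hi) hi = in[i];
        }
        mins[b] = lo;
        maxs[b] = hi;
    }
}
```

Running this once per GUI refresh (at your chosen 24-60 fps) keeps the drawing cost proportional to the view width rather than the sample count.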
> This is a pretty rough outline and I've probably missed some points, but I'm
> sure others here will be able to point out any errors or omissions in this
> overview.
> best,
> Greg
>
> On Wed, May 25, 2011 at 5:38 PM, GW Rodriguez <email@hidden> wrote:
>>
>> I am attempting to draw a waveform (a seemingly taboo topic), and I'm not
>> looking for tips on how to do this. I am just not sure how to get all the
>> samples/amplitudes.
>> Can someone point me in the right direction -- is it in the ASBD? And how
>> do I speed through and read all that data, not in audio time but as fast
>> as the computer can?
>> Thanks,
>> --
>> GW Rodriguez
>>
>> _______________________________________________
>> Do not post admin requests to the list. They will be ignored.
>> Coreaudio-api mailing list (email@hidden)
>> Help/Unsubscribe/Update your Subscription:
>>
>>
>> This email sent to email@hidden