Re: getting to grips with coreaudio
- Subject: Re: getting to grips with coreaudio
- From: tahome izwah <email@hidden>
- Date: Sun, 7 Nov 2010 14:00:07 +0100
Looks like the data you get in your AudioBufferList is in a format different
from what your FFT expects. Check the data format on both ends and make sure
you're not memcpying short int data into a float array.
--th
2010/11/6 David Plans <email@hidden>:
> Hi Tahome, and all...
>
> I worked on the answer below, which clarified a lot about Alex's implementation of the Ooura FFT in Objective-C, but now I have a new problem... I'm successfully opening the iPhone mic, getting audio into a bufferList, and gdb says the array has valid audio data... I can hear myself on audio out.
>
> Here's the OouraFFT interface:
>
> http://pastie.org/1277215
>
> And the @implementation:
>
> http://pastie.org/1277217
>
> So now I have a working call to:
>
> OouraFFT *myFFT;
>
> I can see myFFT being created and gdb is happy, and then I do:
>
> myFFT = [[OouraFFT alloc] initForSignalsOfLength:1024 andNumWindows:4];
>
> The gdb window says myFFT is created, with dataLength 1024 and numFrequencies 512.
>
> but when I try to memcpy data from bufferList to myFFT.inputData, with:
>
> memcpy([myFFT inputData], bufferList->mBuffers[0].mData, bufferList->mBuffers[0].mDataByteSize);
>
> I get -nan(0xffffcfffbfffd) in *inputData...
>
> I've also tried myFFT->inputData and myFFT.inputData, but -> gives:
>
> error: instance variable 'inputData' is declared protected
>
> and dot notation gives me -nan data too...
>
> Am I missing something really obvious about Alex's @implementation of OouraFFT? Something about inputData being protected that my lack of Objective-C knowledge is obscuring?
>
> I would appreciate any advice at all... I think the CoreAudio part of this is working, but I may have misunderstood how bufferLists work, or perhaps something else.
>
> David
>
> On Oct 19, 2010, at 1:32 PM, Alex Wiltschko wrote:
>
>> I think this is a bit beyond what I'm able to help you with... I'd ask on the Coreaudio-api mailing list that Apple has.
>>
>> Best of luck,
>> Alex
>>
>> On Oct 19, 2010, at 1:28 PM, David Plans Casal wrote:
>>
>>> Hello Alex
>>>
>>> First off, thanks for iPhoneFFT, which is a great learning tool for me (just getting started with iOS programming). I know I should move on to vDSP, but your library is easier to understand for now...
>>>
>>> I'm doing research into music therapy and dementia, and I'm trying to write an application where people will hum into an iPhone/iPad and it will return drone pitches approximately centered around the pitch of the humming.
>>>
>>> I'd like to build it into an AudioInput bit of code I've got, whereby:
>>>
>>> - (void)readAudio:(AudioBufferList *)inBuffer
>>> {
>>>     float *input = (float *)(inBuffer->mBuffers[0].mData);
>>>     int bufferSize, i;
>>>
>>>     bufferSize = inBuffer->mBuffers[0].mDataByteSize / sizeof(float);
>>>
>>> Then, following your instructions, I did:
>>>
>>> OouraFFT *myFFT = [[OouraFFT alloc] initForSignalsOfLength:numFrequencies*2 andNumWindows:kNumFFTWindows];
>>>
>>> for (i=0;i<input;i++) {
>>> NSDecimalNumber* fftValue = [[NSDecimalNumber input] retain];
>>> NSMutableArray* fftValuesArray= [[NSMutableArray array] retain];
>>> [fftValuesArray addObject:fftValue];
>>> }
>>> [myFFT calculateWelchPeriodogramWithNewSignalSegment];
>>>
>>> functionWeHaventWritten(myFFT.spectrumData);
>>>
>>> As you can see from my awful objective-c, I haven't written the functionWeHaventWritten yet...all I need is the pitch centroid, but I'm unsure how to go about it.
>>>
>>> Got any advice? My plan is to find the frequency peaks for each frequency band and then divide all pitch strength values by the largest one, so I thought I might loop for (int i = 0; i < 1024; i++) (since I think you use 1024 points) and use something like Math.Pow to find peaks, starting from a fundamental frequency of 0?
>>>
>>> I realize I need to stick things in bins (to get 'musical notes'), but not sure where to start!
>>>
>>> David
>>>
>>> --
>>> David Plans Casal
>>> Andrew Mellon Assistant Professor of Music
>>> Dartmouth College
>>> T: +1-603-646-3678
>>> C: +1-603-7150355
>>> E: email@hidden
>>>
>>
>
> --
> David Plans Casal
> Andrew Mellon Assistant Professor of Music
> Dartmouth College
> T: +1-603-646-3678
> C: +1-603-7150355
> E: email@hidden
>
> On Oct 23, 2010, at 11:56 AM, tahome izwah wrote:
>
>> You should make sure that your FFT size is independent of your audio
>> buffer size. Usually this is done by accumulating data until you have
>> enough to do a full transform. Since most transforms are sized 2^n for
>> speed, this is required: you can't assume that your audio buffers will
>> always be a power of 2 in size.
>>
>> HTH
>> --th
>
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden