
getting to grips with coreaudio


  • Subject: getting to grips with coreaudio
  • From: David Plans <email@hidden>
  • Date: Wed, 20 Oct 2010 13:21:54 -0400

Hello all

I'm completely new to programming with Core Audio, and whilst I've spent the last few days reading docs and trying to get to grips with Objective-C (I've learnt some Java before), I'm hitting stumbling blocks. Namely:

I'm trying to compile a bundle (that I can use in the Unity game engine from C#) that will listen to incoming audio and deliver basic pitch tracking.
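
(Since the bundle has to be driven from C#, I'm picturing a handful of plain C entry points that Unity can reach with DllImport; the sketch below is only the shape of the interface I have in mind, and every name in it is a placeholder of my own, not an existing API:)

	// Hypothetical C interface the bundle would expose to Unity
	// (all names here are placeholders, nothing from a real header).
	int   StartPitchTracking(void);   // start the device/IOProc; returns an OSStatus-style code
	void  StopPitchTracking(void);    // stop the device and tear the FFT down
	float GetCurrentPitchHz(void);    // most recent pitch estimate, 0.0f if none yet

On the C# side each of those would just be a [DllImport] extern that I poll from a script.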

I've successfully got an existing AudioInput bundle to listen to audio in, and pass peak and average power values through. Now I'm trying to incorporate this library:

http://github.com/alexbw/iPhoneFFT

into the bundle. Having declared this IOProc:

static OSStatus ioProc(AudioDeviceID inDevice, const AudioTimeStamp* inNow, const AudioBufferList* inInputData, const AudioTimeStamp* inInputTime, AudioBufferList* outOutputData, const AudioTimeStamp* inOutputTime, void* inRefCon)
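
(For context, the way I understand it the proc is registered with AudioDeviceAddIOProc(deviceID, ioProc, self) before AudioDeviceStart, so that self comes back in inRefCon; the body is then roughly the sketch below, where MyAudioInput is just a stand-in for my actual class name:)

static OSStatus ioProc(AudioDeviceID inDevice, const AudioTimeStamp* inNow, const AudioBufferList* inInputData, const AudioTimeStamp* inInputTime, AudioBufferList* outOutputData, const AudioTimeStamp* inOutputTime, void* inRefCon)
{
	// inRefCon is whatever was handed to AudioDeviceAddIOProc -- here the
	// Objective-C object -- so the C callback can pass the input buffers on.
	MyAudioInput *listener = (MyAudioInput *)inRefCon;
	[listener readAudio:(AudioBufferList *)inInputData]; // readAudio: takes a non-const pointer
	return noErr;
}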

I now have, after an (id)init and a release/dealloc pair:

- (int)startListening
{
	OSStatus err;

	err = AudioDeviceStart(deviceID, ioProc);
	listenCount = listenValue = 0;

	// ooura ios implementation by Alex Wiltschko
	// 1. First initialize the class
	// can we know the size of bufferSize in advance? its scope seems limited to readAudio
	OouraFFT *myFFT = [[OouraFFT alloc] initForSignalsOfLength:bufferSize*2 andNumWindows:kNumFFTWindows];
	CAShow;
	return err;
}

Which I hope sets up the Ooura FFT correctly... Then I feed it audio with this function (note functionWeHaventWritten, where I would put the audio into bins and hopefully get the peak frequency):

- (void)readAudio:(AudioBufferList *)inBuffer
{
	float* input = (float*)(inBuffer->mBuffers[0].mData);
	int bufferSize, i;

	bufferSize = inBuffer->mBuffers[0].mDataByteSize / sizeof(float);
	for (i=0;i<bufferSize;i++)
		listenValue += input[i] * input[i]; // square of the sample value
	listenCount += bufferSize;

	for (i=0;i<bufferSize;i++) {
		myFFT.inputData[i]=(double)input[i];
	}

	[myFFT calculateWelchPeriodogramWithNewSignalSegment];

	functionWeHaventWritten(myFFT.spectrumData);

}
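
(What I have in mind for functionWeHaventWritten is roughly the sketch below. I'm assuming spectrumData holds one magnitude per bin from DC up to Nyquist, and that I can find out the device sample rate; both of those are guesses on my part:)

// Sketch: find the loudest bin and turn its index into a frequency.
static double peakFrequency(double *spectrumData, int numBins, double sampleRate, int fftSize)
{
	int i, peakBin = 0;
	for (i = 1; i < numBins; i++) {
		if (spectrumData[i] > spectrumData[peakBin])
			peakBin = i;
	}
	// Bins are sampleRate / fftSize Hz apart, so the peak sits near:
	return peakBin * sampleRate / fftSize;
}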

PROBLEMS:

-How can I be sure that myFFT's inputData is exactly as large as bufferSize? Should I try CAShow (with the void* ioProc) to see what's going on?
-How do I know the size of bufferSize anyway? 1024? 2048? Since I don't want to instantiate myFFT inside readAudio (I imagine that would be inefficient), I presumably need to (as I'm trying now) create the myFFT instance in startListening and make sure bufferSize there matches the one readAudio sees. My current best guess is the property query sketched just after these questions, but I don't know if it's the right approach.
-What would be the best way to go about putting the result into bins and getting the peak frequency after that?
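
My best guess on the bufferSize question is something like the query below (a sketch only, using the old AudioDeviceGetProperty call and assuming kAudioDevicePropertyBufferFrameSize is the right thing to ask for):

	UInt32 bufferFrames = 0;
	UInt32 propSize = sizeof(bufferFrames);
	// Ask the input device how many frames it will hand the IOProc per callback;
	// if that's right, it's what I'd size the OouraFFT to in startListening.
	OSStatus err = AudioDeviceGetProperty(deviceID, 0, true, kAudioDevicePropertyBufferFrameSize, &propSize, &bufferFrames);

Though I realise that might differ from mDataByteSize / sizeof(float) in readAudio by the channel count if the buffer is interleaved, which is part of my confusion.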

I apologize in advance if I haven't done enough RTFM, but I'm quite lost, and Alex (the developer of this Ooura FFT implementation) pointed me at the collective sage wisdom of this list...

Best,

David Plans


