

Write a sound file with sounds generated within the app


  • Subject: Write a sound file with sounds generated within the app
  • From: James Udo Ludtke <email@hidden>
  • Date: Tue, 6 Dec 2005 23:49:12 -0500

This post is a follow-up to my earlier post: Get selectRecordInput to show output channels in "input" pop-up

Here are the details of what I want to do
-------------------------------------------------------

My Tinnitus retraining software (http://www.vavsoft.com) generates sound sequences that help Tinnitus sufferers to reduce and eliminate their Tinnitus sounds. A user adjusts the sound sequences to suit his type of Tinnitus, and then listens to the sound sequences generated by the software using headphones. A single training session lasts ten minutes. Initially a user listens to three training sessions each day: in the morning, at noon, and in the evening. In the morning and in the evening a user can use his computer; during the day, and under special circumstances such as a business trip, this is not always possible. Several users suggested that I include in my software the ability to write selected sound sequences to a file. This would allow users to load the sound file onto their iPods, and then listen to training sessions while away from their computers.

Code details of the existing sound generator
--------------------------------------------------------------

// ===== Initialisation code, without comments and error trapping code =====

AudioDeviceID device;                       // default output device
UInt32 count, deviceBufferSize;             // deviceBufferSize and deviceFormat also
AudioStreamBasicDescription deviceFormat;   // live in the sinewavedef struct (see below)
OSStatus err;

count = sizeof(device);
err = AudioHardwareGetProperty(kAudioHardwarePropertyDefaultOutputDevice, &count, (void *) &device);
    // error trapping here

count = sizeof(deviceBufferSize);
err = AudioDeviceGetProperty(device, 0, false, kAudioDevicePropertyBufferSize, &count, &deviceBufferSize);
    // more error trapping here

count = sizeof(deviceFormat);
err = AudioDeviceGetProperty(device, 0, false, kAudioDevicePropertyStreamFormat, &count, &deviceFormat);
    // more error trapping here
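
(The error trapping elided above is just the standard check after each call, along these lines; the message text is illustrative.)

    if (err != kAudioHardwareNoError) {
        fprintf(stderr, "CoreAudio call failed, err = %d\n", (int)err);  // needs <stdio.h>
        return;
    }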

// ===== Tone generating function, without comments and error trapping code =====

def = (sinewavedef *)self;                  // def is a sinewavedef *, declared elsewhere

err = AudioDeviceAddIOProc(device, appIOProc, (void *) def);
    // more error trapping here

err = AudioDeviceStart(device, appIOProc);
    // more error trapping here
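
(Not shown in the excerpt: the matching teardown when a session ends, which would stop the device and remove the IOProc again.)

    err = AudioDeviceStop(device, appIOProc);
        // error trapping here

    err = AudioDeviceRemoveIOProc(device, appIOProc);
        // error trapping here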


// ===== Audio processing callback, without the code to set frequency, tone duration, amplitude, etc. =====

OSStatus appIOProc (AudioDeviceID inDevice, const AudioTimeStamp* inNow,
                    const AudioBufferList* inInputData, const AudioTimeStamp* inInputTime,
                    AudioBufferList* outOutputData, const AudioTimeStamp* inOutputTime,
                    void* defptr)
{
    sinewavedef* def = (sinewavedef *)defptr;

    double phase = def->phase;
    double amp   = def->amp;
    double pan   = def->pan;
    double freq  = def->freq;

    double ampz  = def->ampz;
    double panz  = def->panz;
    double freqz = def->freqz;

    // more declarations go here

    int numSamples = def->deviceBufferSize / def->deviceFormat.mBytesPerFrame;
    float *out = outOutputData->mBuffers[0].mData;

    for (int i = 0; i < numSamples; i++) {  // one pass per frame in the buffer
        // code to manipulate sweep, phase, ampz and panz goes here

        float wave = sin(phase) * ampz;     // generate sine wave

        *out++ = wave * (1.0 - panz);       // left channel
        *out++ = wave * panz;               // right channel
    }

    def->phase = phase;                     // carry the phase into the next callback

    return kAudioHardwareNoError;
}
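
(The elided sweep/phase code is presumably the usual oscillator arithmetic: phase advances by 2 * pi * freq / sampleRate each frame, which for a 440 Hz tone at 44100 Hz is about 0.0627 radians per sample.)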

Because, as I learned, CoreAudio can only record to a file from an input device, I obviously must get the sound sequences I generate into an input device. Several approaches occurred to me:


1. It might be possible to set up a default input device, and then change the default input device's buffer address to be the same as the default output device's buffer address. However, I really do not know enough about the similarities and dissimilarities of the input and output devices to know whether that could be done. (It might also stretch my coding ability beyond its limits.)

2. To repeatedly copy the output buffer content to the input buffer. This would use a lot of processor time, and the coding would likely also be more complex than option 1.

3. To modify the existing callback procedure to write to the input buffer in addition to the output buffer, which it writes now. (This might be the simplest approach, assuming it can be done.)

4. When writing a tone sequence to a file, the user does not really have to monitor the sound. I could provide some kind of progress indicator (which I should do anyhow) to keep the user informed that recording is going as planned. In this case I could code an alternate callback procedure that writes only to the input device, if it is indeed possible to write to an input device. Since I write directly into the buffer, this should be possible. (A rough sketch of writing the samples straight to a file follows my question below.)

Looking back on what I just wrote, my preference would be 4, followed by 3, then 1, and, as a last resort, 2.

So my question is: Can 4 be done? If not, which other alternative, including ones I did not think of, would be the best approach?
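
A thought while writing this up: if the generated samples could be handed straight to the AudioFile API in AudioToolbox, option 4 would not need an input device at all. The sketch below is only what I imagine, not tested code; the writeToneFile helper and the 16-bit, 44.1 kHz AIFF format are illustrative choices, not taken from my app.

#include <AudioToolbox/AudioToolbox.h>
#include <libkern/OSByteOrder.h>
#include <math.h>

// Write `seconds` of a stereo sine tone at `freq` Hz into an AIFF file,
// with no audio device involved.
static OSStatus writeToneFile(CFURLRef url, double freq, double seconds)
{
    AudioStreamBasicDescription fmt = { 0 };
    fmt.mSampleRate       = 44100.0;
    fmt.mFormatID         = kAudioFormatLinearPCM;
    fmt.mFormatFlags      = kLinearPCMFormatFlagIsBigEndian
                          | kLinearPCMFormatFlagIsSignedInteger
                          | kLinearPCMFormatFlagIsPacked;    // AIFF wants big-endian ints
    fmt.mChannelsPerFrame = 2;
    fmt.mBitsPerChannel   = 16;
    fmt.mBytesPerFrame    = 4;                               // 2 channels x 2 bytes
    fmt.mBytesPerPacket   = 4;
    fmt.mFramesPerPacket  = 1;

    AudioFileID file;
    OSStatus err = AudioFileCreateWithURL(url, kAudioFileAIFFType, &fmt,
                                          kAudioFileFlags_EraseFile, &file);
    if (err) return err;

    UInt32 totalFrames = (UInt32)(seconds * fmt.mSampleRate);
    double phase = 0.0;
    double step  = 2.0 * M_PI * freq / fmt.mSampleRate;      // radians per frame
    SInt64 offset = 0;

    for (UInt32 i = 0; i < totalFrames && !err; i++) {
        SInt16 s = (SInt16)OSSwapHostToBigInt16((SInt16)(sin(phase) * 32767.0 * 0.5));
        SInt16 frame[2] = { s, s };                          // same signal left and right
        UInt32 nBytes = sizeof(frame);
        err = AudioFileWriteBytes(file, false, offset, &nBytes, frame);
        offset += nBytes;
        phase  += step;
    }
    AudioFileClose(file);
    return err;
}

The real version would of course run my sound-sequence generator instead of the fixed sine, and could update a progress indicator inside the loop.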




  • Follow-Ups:
    • Re: Write a sound file with sounds generated within the app
      • From: Brad Ford <email@hidden>