Re: Newbie trying to play mp3 files instead wav files
- Subject: Re: Newbie trying to play mp3 files instead wav files
- From: Murray Jason <email@hidden>
- Date: Mon, 27 Oct 2008 16:19:52 -0700
Hello Ignacio,
The next version of SpeakHere demonstrates how to record and play back
all supported compressed formats. To help you now, though, here is a
summary of the changes that provide that feature. I hope I'm giving
you enough context here so that you can compare these excerpts with
the existing SpeakHere code.
One important fact to keep in mind is that, in the Simulator, you can
record and play only uncompressed (linear PCM) audio.
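One way to honor that restriction is a conditional at the call site. A minimal sketch, assuming the standard TargetConditionals macros (IMA4 here is just an example device-side choice, not the only option):

```
#import <TargetConditionals.h>

// The Simulator can record only linear PCM, so fall back to it there;
// on the device, pick a compressed format such as IMA4.
#if TARGET_IPHONE_SIMULATOR
    [self setupAudioFormat: kAudioFormatLinearPCM];
#else
    [self setupAudioFormat: kAudioFormatAppleIMA4];
#endif
```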
// This code goes into the initializer for the audio recorder object:
// Specify the recording format. Options are:
//
// kAudioFormatLinearPCM
// kAudioFormatAppleLossless
// kAudioFormatAppleIMA4
// kAudioFormatiLBC
// kAudioFormatULaw
// kAudioFormatALaw
//
[self setupAudioFormat: kAudioFormatLinearPCM];
AudioQueueNewInput (
&audioFormat,
recordingCallback,
self, // userData
NULL, // run loop
NULL, // run loop mode
0, // flags
&queueObject
);
// This is a new method for setting up the audio recording format in
// the recorder class:
- (void) setupAudioFormat: (UInt32) formatID {
// Obtains the hardware sample rate for use in the recording
// audio format. Each time the audio route changes, the sample rate
// needs to get updated.
UInt32 propertySize = sizeof (self.hardwareSampleRate);
AudioSessionGetProperty (
kAudioSessionProperty_CurrentHardwareSampleRate,
&propertySize,
&hardwareSampleRate
);
audioFormat.mFormatID = formatID;
audioFormat.mChannelsPerFrame = 1;
if (formatID == kAudioFormatLinearPCM) {
audioFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger |
kAudioFormatFlagIsPacked;
audioFormat.mFramesPerPacket = 1;
audioFormat.mBitsPerChannel = 16;
audioFormat.mBytesPerPacket = 2;
audioFormat.mBytesPerFrame = 2;
}
}
// This is a little bug fix for copying the magic cookie to the sound
// file in the recorder class:
AudioFileSetProperty (
theFile,
kAudioFilePropertyMagicCookieData, // <-- this parameter is wrong in the original SpeakHere
propertySize,
magicCookie
);
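For context, that corrected call is the last step of the usual cookie-copy sequence: ask the queue how big the cookie is, fetch it, then hand it to the file. A hedged sketch of the surrounding code (the queueObject and theFile variables are assumed from the excerpts above):

```
UInt32 propertySize = 0;
OSStatus status = AudioQueueGetPropertySize (
    queueObject, kAudioQueueProperty_MagicCookie, &propertySize);

// Not every format has a magic cookie; copy it only if one exists.
if (status == noErr && propertySize > 0) {
    char *magicCookie = (char *) malloc (propertySize);
    AudioQueueGetProperty (
        queueObject, kAudioQueueProperty_MagicCookie,
        magicCookie, &propertySize);
    AudioFileSetProperty (
        theFile,
        kAudioFilePropertyMagicCookieData, // file property, not the queue property
        propertySize,
        magicCookie);
    free (magicCookie);
}
```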
// In the player class, a replacement for the playback callback:
static void playbackCallback (
void *inUserData,
AudioQueueRef inAudioQueue,
AudioQueueBufferRef bufferReference
) {
// This callback, being outside the implementation block, needs a
// reference to the AudioPlayer object
AudioPlayer *player = (AudioPlayer *) inUserData;
if ([player donePlayingFile]) return;
UInt32 numBytes;
UInt32 numPackets = [player numPacketsToRead];
// This callback is called when the playback audio queue object has
// an audio queue buffer available for filling with more data from
// the file being played
AudioFileReadPackets (
[player audioFileID],
NO,
&numBytes,
bufferReference->mPacketDescriptions,
[player startingPacketNumber],
&numPackets,
bufferReference->mAudioData
);
if (numPackets > 0) {
bufferReference->mAudioDataByteSize = numBytes;
bufferReference->mPacketDescriptionCount = numPackets;
AudioQueueEnqueueBuffer (
inAudioQueue,
bufferReference,
0,
NULL
);
[player incrementStartingPacketNumberBy: (UInt32) numPackets];
} else {
// 'donePlayingFile' is used by this callback and by setupAudioQueueBuffers
[player setDonePlayingFile: YES];
// if playback is stopping because the file is finished, call AudioQueueStop here;
// if the user tapped Stop, then the AudioViewController calls AudioQueueStop
if (player.audioPlayerShouldStopImmediately == NO) {
[player stop];
}
}
}
// In the player class, there are changes for setting up audio queue
// buffers:
- (void) setupAudioQueueBuffers {
// calculate the size to use for each audio queue buffer, and
// calculate the number of packets to read into each buffer
[self calculateSizesFor: (Float64) kSecondsPerBuffer];
// prime the queue with some data before starting
// allocate and enqueue buffers
int bufferIndex;
bool isFormatVBR = (audioFormat.mBytesPerPacket == 0 ||
audioFormat.mFramesPerPacket == 0);
for (bufferIndex = 0; bufferIndex < kNumberAudioDataBuffers; ++bufferIndex) {
// if you intend to support *only* constant bit-rate formats, you can
// instead use AudioQueueAllocateBuffers. In this case, the playback
// callback function needs to change; you need to change the arguments
// to the AudioQueueEnqueueBuffer function. See its reference documentation.
AudioQueueAllocateBufferWithPacketDescriptions (
[self queueObject],
[self bufferByteSize],
(isFormatVBR ? self.numPacketsToRead : 0),
&buffers[bufferIndex]
);
playbackCallback (
self,
[self queueObject],
buffers[bufferIndex]
);
if ([self donePlayingFile]) break;
}
}
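The isFormatVBR test above can be illustrated standalone. The struct below is a stripped-down stand-in for Core Audio's AudioStreamBasicDescription, keeping only the two fields the test reads (a sketch for illustration, not the real CoreAudio type):

```c
#include <stdbool.h>

// Stand-in for the two AudioStreamBasicDescription fields that the
// VBR check reads. In the real struct these are UInt32.
typedef struct {
    unsigned int mBytesPerPacket;   // 0 means packet byte size varies
    unsigned int mFramesPerPacket;  // 0 means frames per packet vary
} MiniASBD;

// A format is variable bit rate if either field is zero; such formats
// need packet descriptions when buffers are enqueued, which is why the
// code above uses AudioQueueAllocateBufferWithPacketDescriptions.
static bool isFormatVBR (const MiniASBD *fmt) {
    return fmt->mBytesPerPacket == 0 || fmt->mFramesPerPacket == 0;
}
```

For example, 16-bit mono linear PCM has mBytesPerPacket = 2 and mFramesPerPacket = 1, so it is constant bit rate; mp3 has a fixed 1152 frames per packet but a byte size that varies per packet, so mBytesPerPacket is 0 and the test reports VBR.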
-murray
On Oct 27, 2008, at 12:33 PM, William Stewart wrote:
On Oct 27, 2008, at 8:24 AM, Ignacio Enriquez wrote:
Thanks William.
It's been 3 days, and I have been comparing the differences between
SpeakHere's AudioPlayer class and AQPlay (from
/Developer/Examples/CoreAudio),
and I have some comments and questions.
The main difference is the use of CAStreamBasicDescription instead of
AudioStreamBasicDescription in AQPlay.
CAStreamBasicDescription is not a predefined class, and it is too big
for me to understand at this moment.
So my question is:
Is it possible to manage mp3 files with AudioStreamBasicDescription
structures?
CAStreamBasicDescription is a C++ wrapper class around
AudioStreamBasicDescription. For any Core Audio API, you can use either
interchangeably. The advantage of using CAStreamBasicDescription is
that it does a lot for you (including a nice print option).
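On the "is it possible" part: you do not have to fill in an AudioStreamBasicDescription by hand for an mp3; you can ask the opened file for its format with Audio File Services. A hedged sketch (the audioFileID variable is assumed to come from AudioFileOpenURL):

```
AudioStreamBasicDescription fileFormat;
UInt32 size = sizeof (fileFormat);

// kAudioFilePropertyDataFormat fills in the ASBD for whatever the file
// contains -- mp3, AAC, or linear PCM -- with no C++ wrapper needed.
AudioFileGetProperty (
    audioFileID,
    kAudioFilePropertyDataFormat,
    &size,
    &fileFormat
);
```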
If YES, how can I do it? (sorry to ask this, but I really need a
hand, please)
If NO, would adding CAStreamBasicDescription to my iPhone program be a
good idea? (I mean using CAStreamBasicDescription in my iPhone
program project)
In the AQ code:
(1) The main difference is that SpeakHere does NOT play any file
that has variable bit rate (VBR) data, such as AAC or MP3. The main
problem is that it doesn't deal with packet descriptions.
(2) AQPlay does all of this
Other than knowing that difference, I am not that familiar with that
code, so I am not sure what changes you would need to make to
SpeakHere to make this work for you - Murray (cc'd) should be able
to help you.
Bill
I actually want to be able to play, resume, stop, rewind, and
fast-forward some music (which I am doing, but only for WAV files).
In my program I have about twenty 5-minute-long songs, so it is
indispensable to be able to play compressed files such as mp3, as you
can understand.
Thanks in advance.
Ignacio.
2008/10/25 William Stewart <email@hidden>:
The current SpeakHere code doesn't play compressed data.
AQPlay (/Developer/Examples/CoreAudio) has code that deals correctly
with this, so in the meantime you can use it as a guideline.
On Oct 24, 2008, at 1:51 AM, Ignacio Enriquez wrote:
Hi everyone.
I am very new to the Core Audio API, and I am trying to make my
program play *.mp3 files instead of *.wav files.
As you know, a *.wav file is about 10 times bigger than an mp3, so
that's my principal reason.
I am basically using iPhone sample code: the SpeakHere classes.
At first I thought it would be enough to change the 3rd parameter of
the AudioFileOpenURL function from "kAudioFileCAFType" to
"kAudioFileMP3Type", but it seems that this is just for opening the
file, so the mp3 file is not being played.
I wonder what other changes I need to make in order to get mp3 files
playing?
Thanks in Advance.
Ignacio.
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden