Re: Getting an AudioStreamBasicDescription from a stream, not a file
- Subject: Re: Getting an AudioStreamBasicDescription from a stream, not a file
- From: Jens Alfke <email@hidden>
- Date: Wed, 2 Apr 2008 09:55:21 -0700
On 2 Apr '08, at 8:42 AM, Chris Adamson wrote:
I'd hoped that maybe I could just set up my AudioFileStream and get
that property in a callback to my property-handling method, but I
didn't get any property callbacks, at least not in the first 4 KB of
the stream.
I'm doing almost exactly what you're doing, but I'm not seeing those
problems. I wait for the file-stream's property callback to be called
with the kAudioFileStreamProperty_ReadyToProducePackets property,
before creating my AudioQueue. Then I get the
kAudioFileStreamProperty_DataFormat and use the result when creating
the audio queue.
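
In case it's useful, here's roughly what that flow looks like, reduced to a sketch; the type and callback names (MyStreamer, MyOutputCallback, MyPropertyListener) are just placeholders, and error checking is omitted:

#include <AudioToolbox/AudioToolbox.h>

typedef struct {
    AudioFileStreamID fileStream;
    AudioQueueRef     queue;
} MyStreamer;

static void MyOutputCallback(void *inUserData, AudioQueueRef inAQ,
                             AudioQueueBufferRef inBuffer)
{
    /* Refill inBuffer with parsed packets and re-enqueue it here. */
}

static void MyPropertyListener(void *inClientData,
                               AudioFileStreamID inStream,
                               AudioFileStreamPropertyID inPropertyID,
                               AudioFileStreamPropertyFlags *ioFlags)
{
    MyStreamer *s = (MyStreamer *)inClientData;
    if (inPropertyID == kAudioFileStreamProperty_ReadyToProducePackets) {
        /* By this point the parser knows the data format; fetch it
           and use it to create the playback queue. */
        AudioStreamBasicDescription asbd;
        UInt32 size = sizeof(asbd);
        AudioFileStreamGetProperty(inStream,
                                   kAudioFileStreamProperty_DataFormat,
                                   &size, &asbd);
        AudioQueueNewOutput(&asbd, MyOutputCallback, s,
                            NULL, NULL, 0, &s->queue);
    }
}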
One possibly relevant difference is that the stream I'm reading is not a
ShoutCast-style MP3 stream, but rather an MP3 file being sent over a
socket by a peer. That shouldn't make a difference, though; from
experience I've found you can basically chop up an MP3 file any way
you like and the fragments are still playable. The decoder waits till
it sees the bit pattern that starts the next MPEG frame, then begins
decoding from there.
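
Concretely, the sync pattern is the eleven set bits at the start of each MPEG audio frame header, so a resync is essentially a scan for 0xFF followed by a byte whose top three bits are set. A real decoder also validates the rest of the header, but as a sketch of the idea:

#include <stddef.h>

/* Scan for the 11-bit MPEG frame-sync pattern: 0xFF followed by a
   byte whose top three bits are set. Returns the offset, or -1. */
static long FindFrameSync(const unsigned char *data, size_t len)
{
    for (size_t i = 0; i + 1 < len; i++) {
        if (data[i] == 0xFF && (data[i + 1] & 0xE0) == 0xE0)
            return (long)i;   /* candidate frame header offset */
    }
    return -1;                /* no sync pattern in this chunk */
}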
Have you tried feeding the AudioFileStream more than 4k of data? Maybe
the first full frame begins more than 4k into the stream. (This
happens often with MP3 files, since ID3 tags go at the beginning, and
they're just noise as far as the decoder is concerned. Sometimes my
code has to skip 100k or more into an MP3 file before it starts
hitting MPEG frames.)
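
For what it's worth, the feeding side doesn't need to be clever: just hand the parser whatever comes off the socket and let the callbacks fire once it has seen enough. A sketch, continuing the placeholder names from above (error checking omitted):

#include <AudioToolbox/AudioToolbox.h>
#include <unistd.h>

/* Placeholder packets proc; in real code it hands parsed packets
   to the AudioQueue created in MyPropertyListener. */
static void MyPacketsCallback(void *inClientData, UInt32 inNumberBytes,
                              UInt32 inNumberPackets,
                              const void *inInputData,
                              AudioStreamPacketDescription *inPacketDescriptions)
{
    /* Enqueue the parsed packets on the AudioQueue here. */
}

static void RunStream(MyStreamer *s, int socketFD)
{
    AudioFileStreamOpen(s, MyPropertyListener, MyPacketsCallback,
                        kAudioFileMP3Type, &s->fileStream);

    char buf[4096];
    ssize_t n;
    while ((n = read(socketFD, buf, sizeof(buf))) > 0) {
        /* Keep feeding; ReadyToProducePackets may not fire until the
           parser has skipped past a large ID3 tag at the start. */
        AudioFileStreamParseBytes(s->fileStream, (UInt32)n, buf, 0);
    }
    AudioFileStreamClose(s->fileStream);
}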
—Jens