Passing parsed packets from Audio File Stream Services to Audio Queue


  • Subject: Passing parsed packets from Audio File Stream Services to Audio Queue
  • From: Rick Mann <email@hidden>
  • Date: Sun, 8 Nov 2009 22:26:39 -0800

Starting from the Audio How-To:

How do I play streamed audio?

To play streamed audio, you connect to a network stream using the CFNetwork interfaces from Core Foundation, such as those in CFHTTPMessage. You then parse the network packets into audio packets using Audio File Stream Services (AudioToolbox/AudioFileStream.h). Finally, you play the audio packets using Audio Queue Services (AudioToolbox/AudioQueue.h). You can also use Audio File Stream Services to parse audio packets from an on-disk file.

I've written code that streams in data and calls Audio File Stream Services to parse the data stream. This works, in the sense that I receive a handful of property callbacks and then get called back with parsed packets.
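
For context, my parser setup looks roughly like this (a simplified sketch; the callback and function names are mine, and the MP3 file-type hint is just what my stream happens to be):

#include <AudioToolbox/AudioToolbox.h>

// Fired as the parser discovers properties of the stream.
static void MyPropertyProc(void *inClientData,
                           AudioFileStreamID inStream,
                           AudioFileStreamPropertyID inPropertyID,
                           AudioFileStreamPropertyFlags *ioFlags)
{
    if (inPropertyID == kAudioFileStreamProperty_DataFormat) {
        AudioStreamBasicDescription format;
        UInt32 size = sizeof(format);
        AudioFileStreamGetProperty(inStream,
            kAudioFileStreamProperty_DataFormat, &size, &format);
        // This is where I create the Audio Queue with 'format'.
    }
}

// Fired with parsed audio packets.
static void MyPacketsProc(void *inClientData,
                          UInt32 inNumberBytes,
                          UInt32 inNumberPackets,
                          const void *inInputData,
                          AudioStreamPacketDescription *inPacketDescriptions)
{
    // This is the callback I don't know what to do with (more below).
}

// Open the parser once, then feed it every chunk the network delivers.
static AudioFileStreamID sStream;

static void StartParsing(void)
{
    AudioFileStreamOpen(NULL, MyPropertyProc, MyPacketsProc,
                        kAudioFileMP3Type, &sStream);
}

static void OnNetworkData(const void *bytes, UInt32 length)
{
    AudioFileStreamParseBytes(sStream, length, bytes, 0);
}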


I've also written most of the code for the Audio Queue, but I can't figure out how to connect the two. The example in the "Audio Queue Services Programming Guide" determines a maximum packet size in advance, and much of what follows depends on that value. It illustrates reading audio data from a file, but is short on detail when applied to streaming audio.

When streaming, how do I determine an appropriate audio buffer size?
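
The closest thing I've found is to ask the parser itself once it reports kAudioFileStreamProperty_ReadyToProducePackets, something like the sketch below, though I don't know whether this is the intended approach (the fallback value and the packets-per-buffer factor are my own guesses):

// Ask the parser for an upper bound on packet size, then size each
// Audio Queue buffer to hold some number of worst-case packets.
static UInt32 GuessBufferByteSize(AudioFileStreamID stream)
{
    UInt32 maxPacketSize = 0;
    UInt32 size = sizeof(maxPacketSize);
    OSStatus err = AudioFileStreamGetProperty(stream,
        kAudioFileStreamProperty_PacketSizeUpperBound,
        &size, &maxPacketSize);
    if (err != noErr || maxPacketSize == 0)
        maxPacketSize = 2048;        // arbitrary fallback
    return maxPacketSize * 32;       // room for ~32 packets per buffer
}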

It seems like the parser callback provides the data for a direct call to AudioQueueEnqueueBuffer(), but as I understand it, I'm not supposed to call that until the Audio Queue calls me back. That suggests I have to buffer the data in the first callback and make it available in the second.

But that means I have to copy the data, because otherwise it will be deallocated by Audio File Stream Services once the callback returns. And do I need to synchronize this buffering? Two threads call me: one when network data arrives, is parsed, and the packets are passed to me, and another when the Audio Queue asks me for more data.
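
To make the question concrete, here's the kind of handoff I'm imagining, filling in the MyPacketsProc from my earlier sketch (the stash structure, its fixed sizes, and all the names are mine; a real version would need overflow handling, plus the cases where the stash is empty or too big for one buffer):

#include <AudioToolbox/AudioToolbox.h>
#include <pthread.h>
#include <string.h>

// Shared stash between the parser thread and the Audio Queue's thread.
typedef struct {
    pthread_mutex_t mutex;
    char   bytes[65536];                      // copied packet data
    UInt32 byteCount;
    AudioStreamPacketDescription descs[512];  // rebased descriptions
    UInt32 packetCount;
} PacketStash;

// Parser thread: copy the packets, because inInputData is only valid
// for the duration of this callback.
static void MyPacketsProc(void *inClientData, UInt32 inNumberBytes,
                          UInt32 inNumberPackets, const void *inInputData,
                          AudioStreamPacketDescription *inPacketDescriptions)
{
    PacketStash *stash = (PacketStash *)inClientData;
    pthread_mutex_lock(&stash->mutex);
    // (no overflow check here -- a real version needs one)
    memcpy(stash->bytes + stash->byteCount, inInputData, inNumberBytes);
    for (UInt32 i = 0; i < inNumberPackets; i++) {
        AudioStreamPacketDescription d = inPacketDescriptions[i];
        d.mStartOffset += stash->byteCount;   // rebase into the stash
        stash->descs[stash->packetCount + i] = d;
    }
    stash->byteCount   += inNumberBytes;
    stash->packetCount += inNumberPackets;
    pthread_mutex_unlock(&stash->mutex);
}

// Audio Queue thread: drain the stash into the buffer being recycled.
static void MyOutputCallback(void *inUserData, AudioQueueRef inAQ,
                             AudioQueueBufferRef inBuffer)
{
    PacketStash *stash = (PacketStash *)inUserData;
    pthread_mutex_lock(&stash->mutex);
    if (stash->byteCount > 0 &&
        stash->byteCount <= inBuffer->mAudioDataBytesCapacity) {
        memcpy(inBuffer->mAudioData, stash->bytes, stash->byteCount);
        inBuffer->mAudioDataByteSize = stash->byteCount;
        AudioQueueEnqueueBuffer(inAQ, inBuffer,
                                stash->packetCount, stash->descs);
        stash->byteCount = 0;
        stash->packetCount = 0;
    }
    pthread_mutex_unlock(&stash->mutex);
}

Is the mutex the right tool here, or is there a lock-free pattern people use with Audio Queues?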

Is there any example code that shows how to stream?

Hmm. I did a little more Googling and found this: http://cocoawithlove.com/2008/09/streaming-and-playing-live-mp3-stream.html

However, I'm posting this email because I think there's a big gap in the documentation, and I wanted to see what other information the community can provide.

Thanks!

--
Rick
