Re: Output Capture


  • Subject: Re: Output Capture
  • From: Mark Pauley <email@hidden>
  • Date: Wed, 18 Jul 2007 14:09:24 -0700

One method that quite a few people use is the excellent Soundflower software. It's essentially a loop-back audio device: a FIFO output component that you write into and can read from. You set the output of your generator to Soundflower, then open the Soundflower input via the HAL and slurp sample vectors, pushing them to the desired final destination. You should be able to accomplish this easily with Max/MSP (by setting the device of a dac~ object to the Soundflower device and, similarly, the device of an adc~ object to the same Soundflower device). In your case, you may have to modify this by setting the system default output to Soundflower and setting the output of the dac~ to the output component you wish to hear the sound on in Max.
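(Editorial aside, not part of Mark's message: a minimal sketch of the HAL side of the loop-back setup he describes, assuming Soundflower is installed and matching the device by its name string, which is a simplification. It locates the Soundflower device and makes it the system default output, so anything rendering to the default output lands in Soundflower, where an input unit opened on the same device can read the samples back out.)

#include <CoreAudio/CoreAudio.h>
#include <string.h>
#include <stdio.h>

/* Find an audio device whose name contains the given substring. */
static AudioDeviceID FindDeviceByName(const char *wanted)
{
    AudioObjectPropertyAddress addr = {
        kAudioHardwarePropertyDevices,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster
    };

    UInt32 size = 0;
    if (AudioObjectGetPropertyDataSize(kAudioObjectSystemObject, &addr, 0, NULL, &size) != noErr)
        return kAudioDeviceUnknown;

    AudioDeviceID devices[64];
    UInt32 count = size / sizeof(AudioDeviceID);
    if (count > 64) count = 64;
    size = count * sizeof(AudioDeviceID);
    if (AudioObjectGetPropertyData(kAudioObjectSystemObject, &addr, 0, NULL, &size, devices) != noErr)
        return kAudioDeviceUnknown;

    for (UInt32 i = 0; i < count; i++) {
        char name[256] = {0};
        UInt32 nameSize = sizeof(name);
        AudioObjectPropertyAddress nameAddr = {
            kAudioDevicePropertyDeviceName,
            kAudioObjectPropertyScopeGlobal,
            kAudioObjectPropertyElementMaster
        };
        if (AudioObjectGetPropertyData(devices[i], &nameAddr, 0, NULL, &nameSize, name) == noErr &&
            strstr(name, wanted) != NULL)
            return devices[i];
    }
    return kAudioDeviceUnknown;
}

int main(void)
{
    AudioDeviceID flower = FindDeviceByName("Soundflower");
    if (flower == kAudioDeviceUnknown) {
        fprintf(stderr, "Soundflower device not found\n");
        return 1;
    }

    /* Route the system default output into Soundflower; an input unit
       opened on the same device can then pull the sample vectors back out. */
    AudioObjectPropertyAddress defaultOut = {
        kAudioHardwarePropertyDefaultOutputDevice,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster
    };
    OSStatus err = AudioObjectSetPropertyData(kAudioObjectSystemObject, &defaultOut,
                                              0, NULL, sizeof(flower), &flower);
    if (err != noErr) {
        fprintf(stderr, "Failed to set default output: %d\n", (int)err);
        return 1;
    }
    return 0;
}

(In Max/MSP the equivalent is simply choosing the Soundflower device in the dac~ and adc~ device menus, as Mark notes.)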


_Mark


On Jul 18, 2007, at 1:29 PM, Richard Burnett wrote:

I apologize for bringing up a tangent to the 'output capture' question on this list; I know this is really more of a QuickTime question for one of their lists, but part of it has to do with CoreAudio.

I do in fact program in CoreAudio, but I also use the Max/MSP/Jitter application for things I need to prototype rapidly. One of the issues they point out with QT7+ is that the output of QuickTime movies goes directly to CoreAudio, whereas in previous versions it could be siphoned off into their Max/MSP environment. When I asked them about it, they said a core change in QT caused this.

My question is: are there any mechanisms to have QT output to different destinations, or from now on are there only plans to output directly to CoreAudio? I have written a video mixer that I'd love to add some audio enhancements to (compression, an exciter, etc.).
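(Editorial aside, not part of Rick's message: QuickTime 7 does expose one mechanism along these lines. A movie's audio can be retargeted to a specific CoreAudio device, identified by its UID, through an audio context. A minimal, hypothetical sketch, assuming the Movie is already open and the target device UID, e.g. Soundflower's, is known:)

#include <QuickTime/QuickTime.h>

/* Hypothetical helper: route an opened QuickTime Movie's audio to the
   CoreAudio device identified by deviceUID. */
static OSStatus RouteMovieAudioToDevice(Movie movie, CFStringRef deviceUID)
{
    QTAudioContextRef ctx = NULL;
    OSStatus err = QTAudioContextCreateForAudioDevice(kCFAllocatorDefault,
                                                      deviceUID, NULL, &ctx);
    if (err != noErr)
        return err;

    /* The movie renders its audio into this context (device) from now on. */
    err = SetMovieAudioContext(movie, ctx);
    QTAudioContextRelease(ctx);   /* the movie retains what it needs */
    return err;
}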

When someone mentioned DRM, that was the only explanation I could think of. Since QT is being used inside another program, I can't imagine it would be a small number of people who would want this ability (that is, using the QT libraries instead of reinventing the wheel).

Again, sorry to steer off-topic here, but I find the people here INCREDIBLY helpful, and part of this does concern CoreAudio (at least from a QT standpoint).

Thanks!
Rick
Asylum Studio Productions
  • Follow-Ups:
    • Re: Output Capture
      • From: email@hidden
  • References:
    • Output Capture (From: email@hidden)
    • Re: Output Capture (From: Jeff Moore <email@hidden>)
    • Re: Output Capture (From: email@hidden)
    • Re: Output Capture (From: Jeff Moore <email@hidden>)
    • Re: Output Capture (From: Richard Burnett <email@hidden>)
