
Re: A couple of high-level questions


  • Subject: Re: A couple of high-level questions
  • From: William Stewart <email@hidden>
  • Date: Wed, 4 Feb 2004 16:52:02 -0800

We're about to rev the CoreAudio SDK (yes, I know I keep saying this, but really we are going to soon!)

It has some splendid examples that take files and convert them to PCM (regardless of their original format), including an example we've just finished adding that will also read data from a QT movie and let you play it through Core Audio directly. I think that would be a good basis for the kind of thing you are proposing.

Bill

On 03/02/2004, at 9:05 PM, Jens Bauer wrote:

Hi Hamish,

I can't answer the audio questions; I'll leave those to the experts. :)

But when it comes to displaying information in real-time, I have some (minor) experience.
When I wrote an oscilloscope, I found out that...
* Using NSBezierPath is a very stupid idea. =)
* Using NSRectFill is performing a bit better, but not excellent.
* QuickDraw seems to work faster.
* Drawing directly into an NSImage and blitting that onto the screen seems to work for small oscilloscopes.
* Using OpenGL and drawing lines using quads seems to be excellent.

(if anyone disagrees, please override my email)


On Wednesday, Feb 4, 2004, at 05:06 Europe/Copenhagen, Hamish Allan wrote:

I'm planning to write an application with an interface allowing for direct interaction with a visual representation of an audio file (cf. Sound Studio or Soundtrack Loop Utility).

I want to be able to read a variety of file formats (AIFF, WAV, MP3, AAC), but get access to PCM data for direct manipulation, Fourier analysis, etc. I am assuming that this can be achieved through QuickTime?

Then I want to be able to correlate the visual representation with the audio during playback (e.g., the canonical line which moves across the screen showing which part of the sample is currently playing). I'm thinking I would do this by invalidating a custom NSView during those callbacks in which I calculate that the line needs to be moved. Is this a reasonable way to do it?

Is CoreAudio better suited to this purpose than the Carbon Sound Manager? Also, I am wondering what I might gain (apart from minimal performance increments) by using CoreAudio instead of PortAudio?


Love,
Jens
_______________________________________________
coreaudio-api mailing list | email@hidden
Help/Unsubscribe/Archives: http://www.lists.apple.com/mailman/listinfo/coreaudio-api
Do not post admin requests to the list. They will be ignored.


-- mailto:email@hidden
tel: +1 408 974 4056

__________________________________________________________________________
"Much human ingenuity has gone into finding the ultimate Before.
The current state of knowledge can be summarized thus:
In the beginning, there was nothing, which exploded" - Terry Pratchett
__________________________________________________________________________