Re: Using an AU directly
- Subject: Re: Using an AU directly
- From: Jean-Daniel Dupas <email@hidden>
- Date: Tue, 8 Sep 2009 16:50:45 +0200
Even with this explanation, I don't understand why you need it to be real-time. It's the output device's job to determine the speed; if you don't have an output device, you don't need to process the data in real time. If you don't want to process the data too fast (even if I don't understand why), just compute how many frames you should consume during one second, and set a timer that fires each second to process that amount of data and store the analyzed data in a buffer.
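A minimal, untested sketch of what that timer-driven pull might look like in C, assuming the AU's input is already fed by a render callback or connection, and that the 44.1 kHz rate, mono 32-bit float format, and 512-frame slice size are all just illustrative choices:

#include <AudioUnit/AudioUnit.h>

// Pulls one second's worth of audio through `unit` and hands each slice to
// the analysis code. A run-loop timer firing once per second would call this.
static void PullOneSecond(AudioUnit unit, Float64 *ioSampleTime)
{
    const UInt32 kFramesPerSecond = 44100;   // assumed stream sample rate
    const UInt32 kFramesPerSlice  = 512;     // hardware-sized pull quantum

    Float32 samples[512];                    // mono, 32-bit float assumed

    for (UInt32 done = 0; done < kFramesPerSecond; done += kFramesPerSlice) {
        UInt32 frames = kFramesPerSecond - done;
        if (frames > kFramesPerSlice) frames = kFramesPerSlice;

        AudioTimeStamp ts = {0};
        ts.mSampleTime = *ioSampleTime;
        ts.mFlags = kAudioTimeStampSampleTimeValid;

        // Caller-owned buffer the AU renders into.
        AudioBufferList abl;
        abl.mNumberBuffers = 1;
        abl.mBuffers[0].mNumberChannels = 1;
        abl.mBuffers[0].mDataByteSize = frames * sizeof(Float32);
        abl.mBuffers[0].mData = samples;

        AudioUnitRenderActionFlags flags = 0;
        if (AudioUnitRender(unit, &flags, &ts, 0 /* output bus */, frames, &abl) != noErr)
            break;

        *ioSampleTime += frames;
        // ... append `samples` (frames of them) to the analysis buffer here ...
    }
}
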
If you want device/engine sample code, you can have a look at the Soundflower open source project:
http://code.google.com/p/soundflower/
It may be a good starting point if you go the fake-device way.
On 8 Sep 2009, at 15:14, Darrell Gibson wrote:
Ross/Brian,
Thanks for your replies. I hope you don't mind, but I'll try to answer both of you with one post (don't want the thread getting too long!). Okay, I'll try to summarise. Please bear in mind that I am still thinking about this and have not actually tried it yet.

I want to host an AU network (one AU to start with). However, I don't want the host to output the audio, but to store it so I can then analyze the results. I want the network to process the data being "pulled" through it in real time so the analysis can be done in real time (and at a later date I may also pass the data on to another system that will generate an audio output). From what I have now been told, I surmise that what I need to do is create my own device that will "mimic" a hardware device and will "pull" data through the network in real time, as a real device would. The only difference is that the audio will not be sent to the audio hardware, but stored so the analysis can be performed. The "difficulty" I have is really a lack of knowledge: as I have only ever used the higher-level APIs, I am hazy on what an output device actually does in order to "pull" data through the network in real time. If I can understand how an output device operates, I should be able to write my own device that renders the data into memory. This is why I'm after any pointers that will shed light on how the output device initiates the pull of data through an AU network. Unless I've missed it, I have not seen any examples or documentation that explains this.
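For what it's worth, my rough (and completely untested) guess at the mechanism is this: the output device's I/O thread calls AudioUnitRender on the AU attached to its input, which in turn fires whatever render callback is installed on that AU's input bus. If that is right, a "fake device" would make the same AudioUnitRender calls itself on its own schedule, with the source attached something like the sketch below (all names made up):

#include <AudioUnit/AudioUnit.h>

// Called by Core Audio whenever something downstream renders the AU this
// callback is attached to; it must fill ioData with inNumberFrames frames.
static OSStatus SourceProc(void *inRefCon,
                           AudioUnitRenderActionFlags *ioActionFlags,
                           const AudioTimeStamp *inTimeStamp,
                           UInt32 inBusNumber,
                           UInt32 inNumberFrames,
                           AudioBufferList *ioData)
{
    // ... fill ioData from a file, a generator, etc. ...
    return noErr;
}

// Attach the source to input bus 0 of the first AU in the chain.
static OSStatus AttachSource(AudioUnit headUnit)
{
    AURenderCallbackStruct cb = { SourceProc, NULL /* refCon */ };
    return AudioUnitSetProperty(headUnit,
                                kAudioUnitProperty_SetRenderCallback,
                                kAudioUnitScope_Input, 0,
                                &cb, sizeof(cb));
}

If that is roughly how it works, then the only real difference from a hardware device is who decides when AudioUnitRender gets called.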
Thanks again. Your replies are much appreciated, and any further details you can offer would be very welcome.
Darrell.
________________________________________
From: Ross Bencina [email@hidden]
Sent: 08 September 2009 06:19
To: Darrell Gibson; email@hidden
Subject: Re: Using an AU directly
Hi
I'm following this with interest since I need to implement the same thing in the coming months. I already have my own cross-platform graph framework; I just need to adapt individual AUs to work with my synchronous evaluation graph (which, by the way, currently uses a pre-compiled evaluation schedule, not a run-time pull graph traversal).

Darrell: can you please summarise what you are finding difficult about invoking the AU synchronously? All this multi-threading hackery you're discussing is definitely something I intend to avoid...
Thanks!
Ross.
===================================
AudioMulch 2.0 is here!
http://www.audiomulch.com
----- Original Message -----
From: "Darrell Gibson" <email@hidden>
To: <email@hidden>
Sent: Monday, September 07, 2009 11:14 PM
Subject: RE: Using an AU directly
Th,
Unless I am missing something, an AUGraph has to have an output unit/device. This is kind of the same problem I'm running into now without using the AUGraph, but when I started out I thought not using an AUGraph would allow me to create a network of AUs without an output device. I now realise the same problem exists, as there needs to be something to initiate the "pull", and as a result I don't think it would matter whether I used a graph or not.
Darrell.
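One possibility worth noting, sketched here only roughly and untested: an AUGraph can terminate in Apple's generic output unit, which is never driven by a device, so the host has to initiate every pull itself by calling AudioUnitRender on that unit. In the sketch below the AUDelay effect just stands in for whatever AU would actually be hosted, and error checking is omitted:

#include <AudioToolbox/AudioToolbox.h>

// Builds effect -> generic output. Error checking omitted for brevity.
static OSStatus BuildOfflineGraph(AUGraph *outGraph, AudioUnit *outPullUnit)
{
    AudioComponentDescription effectDesc = {
        kAudioUnitType_Effect, kAudioUnitSubType_Delay,
        kAudioUnitManufacturer_Apple, 0, 0 };
    AudioComponentDescription outputDesc = {
        kAudioUnitType_Output, kAudioUnitSubType_GenericOutput,
        kAudioUnitManufacturer_Apple, 0, 0 };

    AUGraph graph;
    AUNode effectNode, outputNode;

    NewAUGraph(&graph);
    AUGraphAddNode(graph, &effectDesc, &effectNode);
    AUGraphAddNode(graph, &outputDesc, &outputNode);
    AUGraphOpen(graph);
    AUGraphConnectNodeInput(graph, effectNode, 0, outputNode, 0);
    AUGraphInitialize(graph);

    // No AUGraphStart here: the generic output unit is not driven by any
    // device, so the host pulls by calling AudioUnitRender on *outPullUnit
    // whenever it wants the next block of frames.
    *outGraph = graph;
    return AUGraphNodeInfo(graph, outputNode, NULL, outPullUnit);
}
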
________________________________________
From: coreaudio-api-bounces+gibsond=email@hidden [coreaudio-api-bounces+gibsond=email@hidden] On Behalf Of tahome izwah [email@hidden]
Sent: 07 September 2009 13:08
To: email@hidden
Subject: Re: Using an AU directly
I can't help you with this I'm afraid, but I am curious: what is your
rationale behind not wanting to use an AUGraph?
--th
-- Jean-Daniel
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden