Re: A Rosetta wrapper AU
- Subject: Re: A Rosetta wrapper AU
- From: Andrew Barnert <email@hidden>
- Date: Thu, 14 Dec 2006 18:06:00 -0800
Thanks. I'm not sure I get what you're suggesting, but I think it's
one of the following:
1. Do the marshaling solution, but instead of pushing large buffers
through shared memory and byteswapping them, intercept the relevant
calls higher up the chain and handle them via audio and MIDI routing.
This seems like it'll be more work than just stupidly marshaling/
shming everything (I have to write input and output plugins, etc.).
I'd also need to fully understand what every call in the API
actually does, rather than just knowing the data types of each
parameter to each function. And it doesn't seem to remove any of the
hard work--for example, I still have to know what my latency is when
the host asks for kAudioUnitProperty_Latency. And I can't imagine
CoreAudio routing will be more efficient than shm. (There's a sketch
of the shm path below, after this list.)
2. Just do audio (and MIDI) routing and don't send anything else. (I
think I'd also need to write both audio and MIDI input devices on the
PPC side so I can push data in, right?)
This would be an improvement over the AUNetSend/midiO and
AUNetReceive solution, as it (1) doesn't require two plugins on each
side, (2) doesn't require either forcing a generator into an effect
chain or using two separate tracks, (3) is much easier for the end
user, and (4) doesn't have the overhead of TCP and a heavy-duty host.
Don't I still have to figure out what latency to report to the host,
and do a lot of the typical component stuff (which will presumably be
different for each type of component)?
More importantly, can this solve the other problems with this
solution (parameter storage and automation, GUI control, reset/panic,
etc.)? If I can't load up one of my old PPC songs with a wrapped
version of each PPC-only plugin and, e.g., get all my old settings,
that doesn't get me what I want.
Still, this could be useful, and it leads to an easy (easier...) way
to map between the 38,000 plugin formats, and it would be a good way
to learn all parts of CoreAudio instead of just the AU interface.
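For reference, the stupid marshal/shm path from option 1 would look
something like the following sketch. (The segment path, the size, and
the wire format are made up for illustration; the real thing would use
shm_open and the actual AudioBufferList layout rather than a bare
float array.)

import mmap, os, struct

# Invented wire format: a big-endian uint32 sample count, then that
# many big-endian float32 samples. Packing with '>' *is* the byteswap:
# the PPC side reads its native big-endian, and the Intel side swaps
# on pack/unpack.
SHM_PATH = '/tmp/rosetta-au-shm'    # stand-in for a real shm_open()
SHM_SIZE = 1 << 20

def open_segment():
    fd = os.open(SHM_PATH, os.O_CREAT | os.O_RDWR)
    os.ftruncate(fd, SHM_SIZE)
    return mmap.mmap(fd, SHM_SIZE)

def push_buffer(shm, samples):
    # Intel (native) side: swap to big-endian and write
    packed = struct.pack('>I%df' % len(samples), len(samples), *samples)
    shm[:len(packed)] = packed

def pull_buffer(shm):
    # PPC (Rosetta) side: reads what is, to it, native byte order
    (n,) = struct.unpack_from('>I', shm, 0)
    return list(struct.unpack_from('>%df' % n, shm, 4))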
On 14 Dec 2006, at 16:07, Mark Pauley wrote:
You might want to consider:
a) Creating an AU output device that can be written to from a
Rosetta process and read from a native process
b) Creating an AU wrapper that fork-exec's a Rosetta stub-host if
one is not alive
c) Sending a message to the Rosetta stub-host that tells it to load
a given PPC AU
d) In the Rosetta stub-host, connect the plugin to the input side
of the AU output device
e) From your native wrapper, read from the output side of the
Rosetta AU output device
basically, you use an AU device to slingshot around Rosetta, while
still leveraging CoreAudio to keep the timing for you.
Would that work? I know I'm missing some necessary details.
_Mark
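To be concrete about step (b), here's roughly what the fork-exec part
could look like. (The stub-host path is invented, and I'm assuming the
stub-host is a PPC-only binary, so that exec'ing it from a native
process automatically lands it under Rosetta.)

import os

STUB_HOST = '/usr/local/libexec/ppc-stub-host'   # invented path

stub_pid = None

def ensure_stub_host():
    # fork-exec the Rosetta stub-host if one is not alive (step b)
    global stub_pid
    if stub_pid is not None:
        try:
            os.kill(stub_pid, 0)    # signal 0: liveness probe only
            return stub_pid
        except OSError:
            stub_pid = None         # it died; fall through and relaunch
    pid = os.fork()
    if pid == 0:
        # child: exec the PPC binary; the kernel hands it to Rosetta
        os.execv(STUB_HOST, [STUB_HOST])
    stub_pid = pid
    return pid

Step (c)'s "load this PPC AU" message could then go to that pid over
a pipe or a Mach port.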
On Dec 14, 2006, at 11:31 AM, Andrew Barnert wrote:
Thanks.
Part of the reason I'm going with an incredibly slow API for the
first draft is to figure out exactly what I need to do about the
latency issue.
I think (but I'm just learning AU, so I could be wrong...) that I
can just do everything directly, and bump up
kAudioUnitProperty_Latency appropriately, and the host has to take
care of dealing with that extra latency. (Less-sophisticated hosts
may not use this info, but I'm assuming GB, Logic, etc. do.)
The only issue is this:
> If the sample latency for your audio unit varies, use this property
> to report the maximum latency. Alternatively, you can update the
> kAudioUnitProperty_Latency property value when latency changes, and
> issue a property change notification using the Audio Unit Event API.
So I have to know the maximum latency--and this is almost
certainly based on the maximum command size, which is based almost
entirely on the maximum buffer list size, which I can't know a
priori (can I?).
Maybe I can just keep track of the slowest command I've seen so
far (or since the last Reset?) and update through notifications,
something like this pseudocode (aka Python):
from time import clock

# Called by each command before it does the marshaling
def PreMarshal(self, ci, scope):
    self.start = clock()

# Called by each command just before returning
def PostDemarshal(self, ci, scope):
    latency = clock() - self.start
    if latency - self.wrappedlatency > self.maxlatency:
        self.maxlatency = latency - self.wrappedlatency
        self.NotifyPropertyChange(ci, scope, kAudioUnitProperty_Latency,
                                  [self.maxlatency + self.wrappedlatency])

# 'data' is a one-element list standing in for the out-parameter
def GetProperty(self, ci, scope, element, property, data):
    self.PreMarshal(ci, scope)
    # do all the real work, i.e. marshal the call over to the PPC side
    # (Marshal is a stand-in name for whatever does that)
    result = self.Marshal(ci, scope, element, property, data)
    self.PostDemarshal(ci, scope)
    if property == kAudioUnitProperty_Latency:
        if result != noErr:
            # the wrapped AU doesn't report latency; treat it as zero
            result = noErr
            data[0] = 0
        self.wrappedlatency = data[0]
        data[0] += self.maxlatency    # add our own marshaling latency
    return result

def HandleNotifyPropertyChange(self, ci, scope, element, property, data):
    # Do all the demarshaling
    if property == kAudioUnitProperty_Latency:
        self.wrappedlatency = data[0]
        data[0] += self.maxlatency
    self.SendNotifyPropertyChange(ci, scope, element, property, data)
Are there different latencies for different scopes, or can I just
keep a single value around?
On 13 Dec 2006, at 01:09, Angus F. Hewlett wrote:
Sounds workable... the one issue you will run into is that
interprocess communication always has a latency penalty, so you'll
need to do some extra buffering to account for it, so as not to be
waiting on the Rosetta process during the Intel process's audio
thread.
Best regards,
Angus.
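To make the extra-buffering point concrete: something like a
single-producer/single-consumer ring between the two processes,
pre-filled by a few buffers, where the depth is exactly the latency
reported to the host. (A toy sketch; the real thing would keep the
storage in the shared segment and update the indices atomically
rather than with plain ints.)

class RingBuffer:
    def __init__(self, depth, frames):
        # 'depth' buffers of pre-fill = the latency reported to the host
        self.bufs = [[0.0] * frames for _ in range(depth)]
        self.depth = depth
        self.frames = frames
        self.read = 0
        self.write = 0

    def push(self, samples):
        # Rosetta side: runs whenever the PPC AU finishes a render
        self.bufs[self.write % self.depth][:] = samples
        self.write += 1

    def pop(self):
        # Native side: called from the Intel process's render thread
        if self.read == self.write:
            # Underrun: Rosetta fell behind. Never block the audio
            # thread waiting for it; emit silence instead.
            return [0.0] * self.frames
        out = list(self.bufs[self.read % self.depth])
        self.read += 1
        return out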