Re: A Rosetta wrapper AU
- Subject: Re: A Rosetta wrapper AU
- From: "Angus F. Hewlett" <email@hidden>
- Date: Wed, 13 Dec 2006 09:09:58 +0000
Sounds workable... the one issue you will run into is that interprocess
communication always carries a latency penalty, so you'll need to do some
extra buffering to account for it; otherwise the Intel process's audio
thread will sit waiting for the Rosetta process to do its thing.
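One way to picture that extra buffering (a minimal sketch, not from the thread; the class name and one-block-of-latency policy are my own assumptions): the native side always hands back the block the subprocess finished last time, so the render thread never blocks on the IPC round trip.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical sketch, assuming a fixed block size: the wrapper keeps one
// extra block queued, trading one block of added latency for a render
// thread that never waits on the Rosetta process.
class LatencyBuffer {
public:
    explicit LatencyBuffer(std::size_t blockSize)
        : pending_(blockSize, 0.0f), ready_(blockSize, 0.0f) {}

    // Render thread: hand the new input off, return the block the
    // subprocess finished last time (one block of latency, no blocking).
    void exchange(const float* in, float* out, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i) out[i] = ready_[i];
        for (std::size_t i = 0; i < n; ++i) pending_[i] = in[i];
    }

    // Called when the subprocess reply arrives: store the processed block.
    void complete(const float* processed, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i) ready_[i] = processed[i];
    }

    const float* pendingInput() const { return pending_.data(); }

private:
    std::vector<float> pending_; // input waiting to go to the Rosetta side
    std::vector<float> ready_;   // processed output ready to hand back
};
```

The first block out is silence; after that the host always gets audio that is one block old, which is the latency penalty Angus describes.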
Best regards,
Angus.
Andrew Barnert wrote:
Hi there; this is my first post here. Sorry if it's a bit long; I'll
give a short summary first:
I'm considering writing a wrapper AU to allow me to run PPC-only AUs
in native hosts on my Intel Macs. It'll spawn a minimal host under
Rosetta and marshal all the commands and callbacks over the process
boundary.
If this is an obviously dumb idea, or if it's already been done, let
me know.
If you're interested in more details, read on. (I have even longer
notes, if anyone's interested.)
--The Problem--
I have some songs that use PPC-only components, at least two of which
(AlphaKanal's Buzzer and Buzzer2) will probably never get ported;
others will require money to upgrade, and it's not worth it if I don't
intend to use them in new songs. So, I'd like some way to use them on
my Intel iMac (without having to run GB or Logic and all of the other
components in Rosetta--although it does seem to run at least as well
as my G4 eMac natively...).
--Three Solutions--
1. Run GB in Rosetta, lock or bounce any tracks that use PPC-only
components, switch back to native, and don't touch those tracks. If I
want to make any changes (e.g., to duplicate a verse across all
tracks), I have to quit, switch back to Rosetta, unlock, change, lock,
quit, and switch back. This requires that none of my components be
Intel-only (which is so far not a problem, but could be some day), and
it requires hacking the app's plist. More importantly, it's a big pain.
2. Use AUNetSend and AUNetReceive, and run another AU host (like
AULab) under Rosetta and have it receive the audio, process it, and
send it back. This only works for effects, but I think something
similar could be set up for instruments using the free midiO
generator? It also seems like a pretty heavyweight solution; AULab
does much more than I need, and so does AUNetSend/Receive. Also, I've
tried this, and sometimes I can't get it to work. On top of that, you
have to switch to a separate app to, e.g., change effect parameters
(no using the iControl...), and some other things don't work the way
you expect (e.g., to reset/panic, you have to do it in both apps).
3. Write a wrapper that does something similar but lighter and more
complete: a component with a trivial AU host embedded as a resource
that it spawns as a subprocess under Rosetta, and marshal all the
calls between them. This doesn't look like that much work (although
it's not trivial), and it seems like it could be useful. Especially
with an fxpansion-like wrapper generator (give it a PPC component, and
it gives you a wrapped-up bundle that's ready to be used directly in
Intel apps).
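The spawn-and-talk core of that third solution can be sketched in a few lines (this is my own toy illustration, not code from the post: the `roundTrip` helper is hypothetical, and the forked child just echoes a reply where the real design would exec the minimal AU host under Rosetta):

```cpp
#include <cassert>
#include <string>
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

// Hypothetical sketch of the wrapper's command channel: create a socket
// pair, fork, and talk to the child over it. In the real wrapper the
// child would exec the embedded host binary (forced under Rosetta); here
// it just answers "OK <command>" so the round trip can be shown.
static std::string roundTrip(const std::string& cmd) {
    int fds[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) != 0) return "";
    pid_t pid = fork();
    if (pid == 0) {                  // child: stand-in for the Rosetta host
        close(fds[0]);
        char buf[256];
        ssize_t n = read(fds[1], buf, sizeof(buf));
        std::string reply = "OK " + std::string(buf, n > 0 ? n : 0);
        (void)write(fds[1], reply.data(), reply.size());
        _exit(0);
    }
    close(fds[1]);                   // parent: the native wrapper component
    (void)write(fds[0], cmd.data(), cmd.size());
    char buf[256];
    ssize_t n = read(fds[0], buf, sizeof(buf));
    close(fds[0]);
    waitpid(pid, nullptr, 0);
    return n > 0 ? std::string(buf, n) : "";
}
```

Every wrapped AU call would become one such request/reply exchange, which is exactly where the latency penalty mentioned in the reply comes from.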
The third solution could also be used for other things, like creating
an effect chain that acts like a single effect as far as GB is
concerned, or wrapping a VST host instead of an AU host, but those
would both take a lot of extra effort that I don't intend to do
(especially when fxpansion already has a nice VST wrapper).
--Implementation Ideas--
I've never written an AU component or host before, but I've done
similar things with similar APIs (like VST).
I've skimmed through the docs (which, by the way, are the worst I've
ever seen from Apple) and followed the tutorial to generate a simple
effect and read through the code in AUBase.cpp and so on, and I think
it's doable. There are about a dozen functions and a half-dozen
callbacks to wrap, all but Render and Get/SetProperty trivial (and
with those, the only real issue seems to be tediously going through
each property type and figuring out how to marshal it).
(And it's also a good way to familiarize myself with AU, of course.)
My initial plan is to use a line-based, whitespace-separated,
human-readable text protocol over a pair of Unix sockets (one for
commands, one for callbacks--then they can both be stupid and
synchronous, I think) and marshal everything, even buffer lists. Of
course this will be slow, but it'll be easy to build and debug.
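For a flavor of what such a line-based protocol might look like (my own sketch; the `SetParameter` verb and field order are invented placeholders, not the real AU API): one command per line, whitespace-separated, trivially readable on both ends.

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Hypothetical command shape for a parameter change; the real wrapper
// would need one of these per wrapped call, plus buffer-list marshaling.
struct SetParamCmd {
    unsigned paramID;
    unsigned scope;
    unsigned element;
    float    value;
};

// One command = one newline-terminated, whitespace-separated line.
static std::string marshal(const SetParamCmd& c) {
    std::ostringstream os;
    os << "SetParameter " << c.paramID << ' ' << c.scope << ' '
       << c.element << ' ' << c.value << '\n';
    return os.str();
}

static bool unmarshal(const std::string& line, SetParamCmd& c) {
    std::istringstream is(line);
    std::string verb;
    return static_cast<bool>(is >> verb >> c.paramID >> c.scope
                                >> c.element >> c.value)
           && verb == "SetParameter";
}
```

Slow, as the post says, but each side can just block on a line read, which keeps both sockets simple and synchronous.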
Then, when I've got the basics working, I can push bulk data
out-of-band (through shmem or something--what's the fastest way to
byteswap a long string of Float32s, by the way?), see if anything can
be usefully cached in the host, and/or do any other optimization
that seems worthwhile.
So, should I go ahead with this? And, if so, does anyone want to see
more detailed notes?
Finally, if I open source it, would anyone want to contribute?
(Otherwise, I plan to get it good enough to run the handful of
components I care about, and that's it.)
Thanks.
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden