Re: A Rosetta wrapper AU
- Subject: Re: A Rosetta wrapper AU
- From: alex <email@hidden>
- Date: Sat, 16 Dec 2006 11:44:55 -0800
One small note: Digidesign will not allow you to write a wrapper for RTAS plugins without their express permission; it is a proprietary plugin format.
alex
At 10:45 AM -0800 12/15/06, Andrew Barnert wrote:
>By the way: I've sketched out a rough design for the socket API, figured out a layout for the wrapped plugin bundle, written a prototype wrapper generator, etc. If anyone wants to look it over, let me know. Meanwhile, I'm building a simple test wrapper that generates silence while marshaling the parameter calls to/from a dummy app, to make sure the idea makes sense. (So far, I like the AU API much better than VST in general, may Charlie S. forgive me, but I'm amazed how much of what looks like backward-compatibility cruft is there in such a comparatively new format. And how much worse the documentation is than most of what comes from Apple.)
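To give a feel for what the socket side looks like, here is a rough sketch of framing one marshaled call as a header plus raw payload. The field names and layout are invented for illustration, not the actual protocol; big-endian is used since the PPC side is big-endian anyway.

```python
# Hypothetical wire format for marshaling one AU call across the
# Intel<->Rosetta socket: a fixed header (selector, scope, element,
# payload size), followed by the raw payload bytes.
import struct

# Big-endian header: selector, scope, element, payload length.
HEADER = struct.Struct(">IIII")

def pack_call(selector, scope, element, payload=b""):
    """Frame one marshaled call as header bytes + payload bytes."""
    return HEADER.pack(selector, scope, element, len(payload)) + payload

def unpack_call(frame):
    """Inverse of pack_call: split a frame back into its fields."""
    selector, scope, element, size = HEADER.unpack_from(frame)
    payload = frame[HEADER.size:HEADER.size + size]
    return selector, scope, element, payload
```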
>
>Anyway, onto your ideas: Yes, you got the point! I want to pass the identification stuff from the wrapped component as the wrapper's, and write a 100%-faithful wrapping around the API, exactly so that I can load up my old songs and they'll just work.
>
>Also, a stupid marshaler that doesn't understand anything about the requests beyond the data types should be easier than trying to do things at a higher level. (I think I have to break that a bit in a few places, like the latency property, but hopefully not too many.) It also means that you could use all kinds of components through the same wrapper, instead of having separate wrappers for effects, generators, instruments, etc.
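As a sketch of how dumb such a marshaler can be: it only needs a table mapping each selector to the struct-style types of its arguments, and never interprets what the call means. The signature table below is invented for illustration.

```python
# A "stupid" marshaler that knows only the data types of each argument,
# not the semantics of the call. struct format codes stand in for the
# real type descriptions; the selector table entries are hypothetical.
import struct

SIGNATURES = {
    # selector name -> big-endian struct format for its arguments
    "SetParameter": ">Ifi",   # param id (uint), value (float32), offset (int)
    "GetParameter": ">I",     # param id (uint)
}

def marshal(selector, *args):
    """Serialize the arguments using only their declared types."""
    return struct.pack(SIGNATURES[selector], *args)

def demarshal(selector, data):
    """Recover the arguments on the other side of the socket."""
    return struct.unpack(SIGNATURES[selector], data)
```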
>
>It's funny; I didn't even think about writing a wrapper using Classic, even though I did think about Windows and Linux plugins. I guess Apple's got me pretty solidly converted to Intel....
>
>Anyway, the idea of wrapping VST, RTAS, MAS, DX, LADSPA/DSSI, MESS, etc. is the first thing everyone suggests. The problem is that this would require a very different solution. If the APIs were 1-to-1 matches, there wouldn't be 30,000 separate plugin formats in the first place. A VST plugin doesn't know how to do ramped parameter changes, a DX plugin can't deal with interleaved buffers or side chains, an MAS host wouldn't know what to do with a Cocoa GUI, no format is going to send the callbacks another format expects (even if they all used the same basic push/pull model), etc. So the wrappers would be pretty heavy-weight, to say the least.
>
>Mark Pauley's suggestion of using audio and MIDI routing instead of direct API marshaling (I think that's what he suggested) is a better way to do that. It should be relatively easy to map reasonably well between dozens of different formats that way (one host per original format, and one wrapper plugin for each Intel-native format that you care about--N+M instead of NxM converters).
>
>But it wouldn't get you the benefit of being able to load up your old songs and sessions, which is critical to what both you and I want. So I think that's a separate (maybe easier?) project, even if it sounds very similar at first glance.
>
>Anyway, thanks for your input.
>
>On 14 Dec 2006, at 23:11, Kenneth Weiss wrote:
>
>>This is a great initiative, I was thinking about something similar but never got round to doing it.
>>What really bugs me is that when the architecture changes (OS 9 to OS X to Intel to 64-bit), I lose all the plugins I like, and lose all my old sessions.
>>Small companies may not have the manpower to update them, or I have to wait until the bigger companies update their product lines, which can take anywhere from months to years.
>>So for me session compatibility is a major issue, since I can't load many of my old sessions.
>>
>>You could make the AU insert "fake" the manufacturer and the component type, so that you get session compatibility with older Rosetta sessions.
>>
>>If you are really brave, try making a host that will run under the Classic OS 9 emulator and transmit the audio to it! You could save all those amazing plugins I had back then!
>>
>>And if you ever get that host up and running, why not load VST and RTAS plugins along the way and translate them into AU format?
>>
>>Cheers,
>>Kenneth
>>
>>
>>-----Original Message-----
>>From: coreaudio-api-bounces+kenneth=email@hidden on behalf of Andrew Barnert
>>Sent: Thu 12/14/2006 9:06 PM
>>To: Mark Pauley
>>Cc: email@hidden
>>Subject: Re: A Rosetta wrapper AU
>>
>>Thanks. I'm not sure I get what you're suggesting, but I think it's
>>one of the following:
>>
>>1. Do the marshaling solution, but instead of pushing large buffers
>>through shared memory and byteswapping them, intercept the relevant
>>calls higher up the chain and handle them via audio and MIDI routing.
>>
>>This seems like it'll be more work than just stupidly marshaling/
>>shming everything (I have to write input and output plugins, etc.).
>>I'd also need to fully understand what every call in the API
>>actually does, rather than just knowing the data types of each
>>parameter to each function. And it doesn't seem to remove any of the
>>hard work--for example, I still have to know what my latency is when
>>the host asks for kAudioUnitProperty_Latency. And I can't imagine
>>CoreAudio routing will be more efficient than shm.
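A minimal sketch of that shm-plus-byteswap path, using Python's multiprocessing.shared_memory purely as a stand-in for whatever shared-memory mechanism the real wrapper would use (all names here are illustrative):

```python
# Push float32 sample buffers through shared memory, byteswapping on
# the way in so the big-endian PPC side can read them directly.
from array import array
from multiprocessing import shared_memory

def write_swapped(samples):
    """x86 side: byteswap native float32 samples and place them in shm."""
    buf = array("f", samples)
    buf.byteswap()  # native little-endian -> big-endian for the PPC side
    data = buf.tobytes()
    shm = shared_memory.SharedMemory(create=True, size=len(data))
    shm.buf[:len(data)] = data
    return shm

def read_swapped(shm, count):
    """What the PPC-side reader would see after swapping back to native."""
    buf = array("f", bytes(shm.buf[:count * 4]))  # 4 bytes per float32
    buf.byteswap()
    return list(buf)
```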
>>
>>2. Just do audio (and MIDI) routing and don't send anything else. (I
>>think I'd also need to write both audio and MIDI input devices on the
>>PPC side so I can push data in, right?)
>>
>>This would be an improvement over the AUNetSend/midiO and
>>AUNetReceive solution, as it (1) doesn't require two plugins on each
>>side, (2) doesn't require either forcing a generator into an effect
>>chain or using two separate tracks, (3) is much easier for the end
>>user, and (4) doesn't have the overhead of TCP and a heavy-duty host.
>>
>>Don't I still have to figure out what latency to report to the host,
>>and do a lot of the typical component stuff (which will presumably be
>>different for each type of component)?
>>
>>More importantly, can this solve the other problems with this
>>solution (parameter storage and automation, GUI control, reset/panic,
>>etc.)? If I can't load up one of my old PPC songs with a wrapped
>>version of each PPC-only plugin and, e.g., get all my old settings,
>>that doesn't get me what I want.
>>
>>Still, this could be useful, and it leads to an easy (easier...) way
>>to map between the 38,000 plugin formats, and it would be a good way
>>to learn all parts of CoreAudio instead of just the AU interface.
>>
>>On 14 Dec 2006, at 16:07, Mark Pauley wrote:
>>
>>>You might want to consider:
>>>
>>>a) Creating an AU output device that can be written to from a
>>>Rosetta process and read from a Native process
>>>b) Creating an AU wrapper that fork-exec's a rosetta stub-host if
>>>one is not alive
>>>c) Sending a message to the rosetta stub-host that tells it to load
>>>a given ppc AU
>>>d) In the Rosetta stub-host, connect the plugin to the input side
>>>of the AU output device
>>>e) From your native wrapper, read from the output side of the
>>>Rosetta AU output device
>>>
>>>basically, you use an AU device to slingshot around Rosetta, while
>>>still leveraging CoreAudio to keep the timing for you.
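Step (b) might look something like this sketch: spawn the stub-host on first use and reuse it while it is alive. The stub-host command line is just a placeholder here.

```python
# fork-exec a Rosetta stub-host if one is not already alive.
# poll() is None while the child process is still running.
import subprocess
import sys

_stub = None

def ensure_stub_host(cmd=None):
    """Launch the stub host if none is alive; return the live process."""
    global _stub
    if _stub is None or _stub.poll() is not None:
        # Placeholder command: a child that just waits around for work.
        cmd = cmd or [sys.executable, "-c", "import time; time.sleep(60)"]
        _stub = subprocess.Popen(cmd)
    return _stub
```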
>>>
>>>Would that work? I know I'm missing some necessary details.
>>>
>>>
>>>_Mark
>>>
>>>On Dec 14, 2006, at 11:31 AM, Andrew Barnert wrote:
>>>
>>>>Thanks.
>>>>
>>>>Part of the reason I'm going with an incredibly slow API for the
>>>>first draft is to figure out exactly what I need to do about the
>>>>latency issue.
>>>>
>>>>I think (but I'm just learning AU, so I could be wrong...) that I
>>>>can just do everything directly, and bump up
>>>>kAudioUnitProperty_Latency appropriately, and the host has to take
>>>>care of dealing with that extra latency. (Less-sophisticated hosts
>>>>may not use this info, but I'm assuming GB, Logic, etc. do.)
>>>>
>>>>The only issue is this:
>>>>>If the sample latency for your audio unit varies, use this property to report the
>>>>>maximum latency. Alternatively, you can update the kAudioUnitProperty_Latency
>>>>>property value when latency changes, and issue a property change notification
>>>>>using the Audio Unit Event API.
>>>>
>>>>So I have to know the maximum latency--and this is almost
>>>>certainly based on the maximum command size, which is based almost
>>>>entirely on the maximum buffer list size, which I can't know a
>>>>priori (can I?).
>>>>
>>>>Maybe I can just keep track of the slowest command I've seen so
>>>>far (or since the last Reset?) and update through notifications,
>>>>something like this pseudocode (aka Python):
>>>>
>>>>    # Called by each command before it does the marshaling
>>>>    def PreMarshal(self, ci, scope):
>>>>        self.start = clock()
>>>>
>>>>    # Called by each command just before returning
>>>>    def PostDemarshal(self, ci, scope):
>>>>        latency = clock() - self.start
>>>>        if latency - self.wrappedlatency > self.maxlatency:
>>>>            self.maxlatency = latency - self.wrappedlatency
>>>>            self.NotifyPropertyChange(ci, scope, kAudioUnitProperty_Latency,
>>>>                                      [self.maxlatency + self.wrappedlatency])
>>>>
>>>>    def GetProperty(self, ci, scope, element, property, *data):
>>>>        self.PreMarshal(ci, scope)
>>>>        result = ...  # do all the real work, filling in data
>>>>        self.PostDemarshal(ci, scope)
>>>>        if property == kAudioUnitProperty_Latency:
>>>>            if result != noErr:
>>>>                result = noErr
>>>>                data[0] = 0
>>>>            self.wrappedlatency = data[0]
>>>>            data[0] += self.maxlatency
>>>>        return result
>>>>
>>>>    def HandleNotifyPropertyChange(self, ci, scope, element, property, *data):
>>>>        # Do all the demarshaling
>>>>        if property == kAudioUnitProperty_Latency:
>>>>            self.wrappedlatency = data[0]
>>>>            data[0] += self.maxlatency
>>>>        self.SendNotifyPropertyChange(ci, scope, element, property, *data)
>>>>
>>>>Are there different latencies for different scopes, or can I just
>>>>keep a single value around?
>>>>
>>>>On 13 Dec 2006, at 01:09, Angus F. Hewlett wrote:
>>>>
>>>>>Sounds workable... the one issue you will run in to is that
>>>>>interprocess communication always has a latency penalty, so
>>>>>you'll need to do some extra buffering to account for that so as
>>>>>not to be waiting for the Rosetta process to do its thing during
>>>>>the Intel process' audio thread.
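The extra buffering could be as simple as this sketch: keep a few blocks of slack between the Rosetta producer and the Intel consumer so the audio thread never blocks on IPC. Block size and queue depth are invented; a real wrapper would size them from the measured IPC latency and report the slack via kAudioUnitProperty_Latency.

```python
# A slack buffer between the Rosetta producer and the native audio
# thread: pre-filled with silence so early reads never wait on IPC,
# and non-blocking on underrun (emit silence rather than stall).
from collections import deque

class SlackBuffer:
    def __init__(self, depth=4, block=512):
        self.queue = deque()
        self.block = block
        # Pre-fill with silence; this slack is the latency we report.
        for _ in range(depth):
            self.queue.append([0.0] * block)

    def push(self, samples):
        """Called from the Rosetta side when a block arrives over IPC."""
        self.queue.append(samples)

    def pull(self):
        """Called from the native audio thread; never blocks."""
        if self.queue:
            return self.queue.popleft()
        return [0.0] * self.block  # underrun: silence instead of a stall
```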
>>>>>
>>>>>Best regards,
>>>>> Angus.
>>>>
>>>>_______________________________________________
>>>>Do not post admin requests to the list. They will be ignored.
>>>>Coreaudio-api mailing list (email@hidden)
>>>>Help/Unsubscribe/Update your Subscription:
>>>>
>>>>This email sent to email@hidden
>>>
>>