On May 2, 2007, at 3:40 PM, Paul Miller wrote:

> Peter Litwinowicz wrote:
>
>> Earlier I said: Well, there you are. It does start breaking down. Which is why AE always provides full frames, just time-sampled at the field rate when appropriate. AE converts EVERYTHING to frames and works from there, and then 60i footage gets processed as 60p... Then these issues go away (with the problem that it is sometimes almost impossible to identify what was originally interlaced material). Inefficient for a realtime editing app, of course.
>> Note that Premiere Pro actually always hands us full-res frames. And it is an editing app... So it *can* be done in an editing app.
> Full-size frames are nice.
>
> I must admit I've been very spoiled by the way AE/Premiere does it.
Here's a random question for all of you - AE/Premiere are doing a lot of resampling when they hand you full-size frames that were generated from fields, right? (Or are they just line-doubling the fields?)
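For reference, the two options in that question look roughly like this. A minimal sketch in plain Python (invented names, lists-of-rows standing in for scanlines; this is not AE's or Premiere's actual code):

```python
# Two common ways to rebuild a full-height frame from a single field.
# These are illustrative only -- real hosts may use fancier filters.

def line_double(field):
    """Naive reconstruction: repeat each field line twice."""
    out = []
    for row in field:
        out.append(list(row))
        out.append(list(row))
    return out

def interpolate(field):
    """Smoother reconstruction: fill each missing scanline with the
    average of the field lines above and below it."""
    out = []
    for i, row in enumerate(field):
        out.append(list(row))
        nxt = field[i + 1] if i + 1 < len(field) else row  # clamp at bottom
        out.append([(a + b) // 2 for a, b in zip(row, nxt)])
    return out

field = [[0, 0], [10, 10], [20, 20]]
print(len(line_double(field)))   # 6 scanlines from 3 field lines
print(interpolate(field)[1])     # [5, 5] -- midway between 0 and 10
```

Line-doubling is cheap but loses half the vertical detail; interpolation costs a little more and looks noticeably better on diagonals.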
In Motion, when the user sets the canvas to display at half-res, our internal filters will take a full-sized frame as input (since that's what actually comes out of the file) and output a half-res frame because this is faster to do than resampling. (In reality, the resampling happens in our fragment program through OpenGL magic, but it's a lot faster than resampling first, then giving the resampled input to the filter.) It's very odd to most 3rd party developers when they first run into it, and I think we've "fixed" things so that 3rd party filters no longer get this case in Motion 3. (But don't quote me on that.)
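The trade-off described above can be sketched like this (hypothetical names, plain lists for pixels; Motion actually does this in a GPU fragment program, not in host code): a filter that emits a half-res result directly in one pass, versus resampling first and then filtering in a second pass.

```python
# A trivial gain "filter" used to compare the two pipelines.

def filter_direct_half(frame, gain):
    """Filter and decimate in a single pass: sample every other pixel
    while applying the filter, never materializing a half-res input."""
    return [[px * gain for px in row[::2]] for row in frame[::2]]

def filter_two_pass(frame, gain):
    """Resample first, then filter -- same result, but the intermediate
    half-res frame has to be written out and re-read."""
    half = [row[::2] for row in frame[::2]]
    return [[px * gain for px in row] for row in half]

frame = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
print(filter_direct_half(frame, 2))                      # [[2, 6], [18, 22]]
print(filter_direct_half(frame, 2) == filter_two_pass(frame, 2))  # True
```

For a point operation like this the outputs are identical, which is exactly why a host can fold the resample into the filter pass; a filter with a spatial kernel would need its sample coordinates scaled too, which is the part that surprises third-party developers.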
Would it be helpful and faster, or just too confusing, to hand your plugins everything in its native format? If we had a resampling function of some sort that you could call when you didn't want to handle different sizes, would that help?
I agree that allowing the plugins to tell us what they want us to give them, and what they want to return to us would be ideal for 3rd party developers. But we'll have to see what kind of rearchitecting that would require on our end(s), too. We have some other wacky ideas in this area, as well, but our impression has been that most 3rd party developers want to do as little worrying about things like coordinate spaces, resolution, fields, etc. as possible and concentrate on writing image processing routines. I realize that doesn't describe everyone, though. For someone writing a really great deinterlacer, it's absolutely required to be able to get fields, whereas someone writing a blur might prefer to never see fields at all.
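One way the "plugins tell us what they want" idea could look, as a purely hypothetical sketch (none of these names exist in any real plugin API): the plugin declares its input requirements up front, and the host prepares the image accordingly.

```python
from enum import Flag, auto

class InputWants(Flag):
    FULL_FRAMES = auto()    # host deinterlaces/resamples before calling us
    FIELDS = auto()         # hand over raw fields (e.g. for a deinterlacer)
    SQUARE_PIXELS = auto()  # host resamples non-square pixels away first

class BlurPlugin:
    # A blur would rather never see fields or non-square pixels.
    wants = InputWants.FULL_FRAMES | InputWants.SQUARE_PIXELS

class DeinterlacerPlugin:
    # A deinterlacer absolutely needs the raw fields.
    wants = InputWants.FIELDS

def host_prepare(plugin):
    """Host-side dispatch: deliver fields only if the plugin asked."""
    return "fields" if InputWants.FIELDS in plugin.wants else "frames"

print(host_prepare(BlurPlugin))          # frames
print(host_prepare(DeinterlacerPlugin))  # fields
```

The appeal of this shape is that the default path (full frames, square pixels) keeps simple filters simple, while specialists like a deinterlacer can opt in to the raw data.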
The more I work in this field the more I hate interlaced footage. :) And non-square pixels. Don't get me started.
Darrin