Thanks for the explanation, Darrin.
So while we can do a couple of our simple generators using basic OpenGL, the text-based ones really need Quartz AND hardware rendering performance. Since CGGLContext is deprecated, is it possible to draw into a CGBitmapContext and then output that to the texture?

Martin Baker
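A minimal sketch of the path being asked about, assuming a current OpenGL context and an existing texture; the function name, the fill call standing in for real Quartz drawing, and the choice of pixel format are illustrative, and nothing here is FxPlug-specific API:

    #include <ApplicationServices/ApplicationServices.h>
    #include <OpenGL/gl.h>
    #include <stdlib.h>

    /* Draw with Quartz into a CGBitmapContext, then hand the pixels to an
     * existing OpenGL texture. The texture ID, size, and the actual drawing
     * would come from the plug-in; this only shows the general
     * CGBitmapContext -> glTexImage2D route. */
    static void UploadQuartzDrawingToTexture(GLuint texture, size_t width, size_t height)
    {
        size_t bytesPerRow = width * 4;
        void *pixels = calloc(height, bytesPerRow);

        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        /* Little-endian BGRA with premultiplied alpha: a layout both Quartz
         * and the GPU handle without any CPU-side swizzling. */
        CGContextRef bitmap = CGBitmapContextCreate(pixels, width, height,
                                                    8, bytesPerRow, colorSpace,
                                                    kCGImageAlphaPremultipliedFirst |
                                                    kCGBitmapByteOrder32Little);
        CGColorSpaceRelease(colorSpace);

        /* Any Quartz drawing goes here: text, paths, gradients, etc. */
        CGContextSetRGBFillColor(bitmap, 1.0, 1.0, 1.0, 1.0);
        CGContextFillRect(bitmap, CGRectMake(0.0, 0.0, width, height));

        /* Upload: GL_BGRA + GL_UNSIGNED_INT_8_8_8_8_REV matches the byte
         * order chosen for the bitmap context above. */
        glBindTexture(GL_TEXTURE_2D, texture);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, (GLsizei)width, (GLsizei)height, 0,
                     GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, pixels);

        CGContextRelease(bitmap);
        free(pixels);
    }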
On 1 Aug 2011, at 18:55, Darrin Cardani wrote:
On Jul 30, 2011, at 6:23 PM, Pierre Jasmin wrote:
On 7/30/2011 3:15 AM, Martin Baker wrote:
We're seeing that software rendering for a generator is extremely slow. All of these tests are on a Motion 5 project at 1920x1080, 29.97 fps, progressive, at full resolution.
...
So even before we add any custom code we're hampered by software rendering, and this is just a simple generator with an output only, not a filter with an input and an output.
How does Motion handle canDoHardware and canDoSoftware
// graphics-hardware-assisted rendering versus CPU-only rendering
For example, if one had a Motion template with:
effect 1 - canDoHardware=NO, canDoSoftware=YES
(your case 3)
effect 2 - canDoHardware=YES, canDoSoftware=YES
<does Motion go to the GPU here and move data to the GPU and back out?>
effect 3 - canDoHardware=NO, canDoSoftware=YES
With empty renderOutput methods, are you in a scenario like that still regressing to something like 6 fps playback (less/more)?
I would be curious about your render times: effect 1 alone, versus effects 1 and 3, versus effects 1, 2, and 3.
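A toy model of that three-effect template (the Effect struct, CountTransfers, and the "prefer the GPU" policy are all made up here for illustration; none of this is FxPlug or Motion API) just counts how often a frame would have to cross the CPU/GPU boundary if the host renders in hardware and transfers naively:

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical model: count CPU<->GPU crossings for a chain of effects
     * when the host renders in hardware and moves the image whenever the
     * next effect can't run where the image currently lives. */
    typedef struct {
        const char *name;
        bool canDoHardware;
        bool canDoSoftware;
    } Effect;

    static int CountTransfers(const Effect *chain, int count)
    {
        bool imageOnGPU = true;                     /* host is rendering in hardware */
        int transfers = 0;
        for (int i = 0; i < count; i++) {
            bool runOnGPU = chain[i].canDoHardware; /* naive: prefer the GPU when possible */
            if (runOnGPU != imageOnGPU) {
                transfers++;                        /* download or upload */
                imageOnGPU = runOnGPU;
            }
            printf("%s runs on %s\n", chain[i].name, runOnGPU ? "GPU" : "CPU");
        }
        if (!imageOnGPU) transfers++;               /* final upload for display */
        return transfers;
    }

    int main(void)
    {
        Effect effects[] = {
            { "effect 1", false, true },   /* software-only */
            { "effect 2", true,  true },   /* both */
            { "effect 3", false, true },   /* software-only */
        };
        printf("transfers: %d\n", CountTransfers(effects, 3));
        return 0;
    }

Under that naive policy the template costs four crossings per frame; a host that noticed effect 2 can also render in software could run it on the CPU and get away with one download and one upload. Which of these the app actually does is what the question above is asking.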
So here's what happens. If your plug-in says it can render using hardware and Motion is rendering in hardware, everything stays on the GPU. Likewise, if your plug-in says it can render in software and Motion is rendering in software (this only happens inside a Motion template that's background rendering for FCPX, I believe), everything stays on the CPU. If your plug-in says it can only render in one or the other, and the app is using the opposite method, we have to move data between the GPU and CPU. This should all be working correctly for the simple case of a single generator in the timeline.
For the more complex case where there are multiple effects some of which are CPU-only and some of which are GPU-only, we very likely do the transfer more often than we need to. For example, if the app is rendering on the GPU and you have a GPU-only generator with 2 CPU-only filters, we'll render the generator on the GPU, then download the output to the CPU, apply the first filter, upload the result to the GPU, then turn right around and download the result back to the CPU to apply the next filter, and then upload the result to the GPU.
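For a sense of what each of those download/upload steps means at the OpenGL level, one round trip looks roughly like the sketch below. This assumes a current GL context and a BGRA texture; RoundTrip and cpuFilter are illustrative names, and this is not how Motion actually implements the copy:

    #include <OpenGL/gl.h>
    #include <stdlib.h>

    /* One GPU -> CPU -> GPU round trip: read the texture back into system
     * memory, run a CPU-only filter on the buffer, push the result back. */
    static void RoundTrip(GLuint texture, GLsizei width, GLsizei height,
                          void (*cpuFilter)(unsigned char *pixels, GLsizei w, GLsizei h))
    {
        unsigned char *pixels = malloc((size_t)width * height * 4);

        /* Download. */
        glBindTexture(GL_TEXTURE_2D, texture);
        glGetTexImage(GL_TEXTURE_2D, 0, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, pixels);

        /* CPU-only work. */
        cpuFilter(pixels, width, height);

        /* Upload. */
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                        GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, pixels);

        free(pixels);
    }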
This is obviously not ideal. I hope we can fix it in the future. Please file a bug if you'd like to see it fixed, too. The obvious thing you can do to help your own product is to make your filter or generator work in both scenarios. It's more work for you, but it's optimal for your users regardless of what changes we make in the future.
Thanks! Darrin