On Feb 4, 2008, at 10:18 PM, Steve Christensen wrote:

> I have been testing some effects on both 8-bit and 32-bit (float)
> channel images and ran across some weird (to me) behavior. After
> launching Motion, I dragged a large (2276x2649) Photoshop file into
> the middle of the workspace and applied a filter. This worked fine
> at 8-bit depth. I then opened the project properties window, changed
> to 32-bit float depth, and clicked the OK button. Motion popped up a
> sheet saying that the object is too large to render and that it
> would be cropped to 2048x2048. My filter was then called to render,
> and the final image looks like the edge pixels were repeated to fill
> out the original image size. (I didn't do this myself, since my
> filter was only passed a 2048x2048 image.) Why am I seeing the
> difference, especially given that the original image is larger than
> 2048x2048? I'd expect the same message in the 8-bit case. Here are
> the numbers Motion is passing to my plugin:
>
> rendering at kFxDepth_UINT8:
>     rowCount       = 2648
>     columnCount    = 2276
>     inputRowBytes  = 9104
>     outputRowBytes = 9104
>
> rendering at kFxDepth_FLOAT32:
>     rowCount       = 2048
>     columnCount    = 2048
>     inputRowBytes  = 32768
>     outputRowBytes = 32768

The reason you see this message is that Motion has to guess how much of your computer's VRAM your plugin is going to use during processing. Since it can't know for sure how many pBuffers, etc. your plugin may allocate, it assumes that the plugin will probably not use more than 4 times the size of the image at any one time. So it asks the video card how much VRAM it has, and also what the largest texture size it can handle is. Those things are controlled by a combination of your hardware and the video card drivers. Most modern cards can handle textures that are 4K on a side (4096x4096). However, when you switch to 32-bit-per-channel processing, each texture takes up 4 times as much memory, so we may estimate that you will be able to process less and we'll still downsample for you. This can happen in 8-bit-per-channel mode, too, if you're on a card with less VRAM (a 9600 with 64 MB and 2 monitors attached, for example). There are better ways of handling this situation and we're always trying to improve them, but that's the current situation.

Are you working in hardware or software? It sounds like hardware. If that's the case, the clamping to edges is part of the OpenGL state and you can change it. I believe you need to call:

    glTexParameteri (GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri (GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

You can use just GL_CLAMP to get the behavior you see right now, or GL_REPEAT to tile the image. (And you should probably first save the current state with a call to glGetTexParameteriv() and then restore it after your processing.)

Let me know if that helps at all or if you have any other questions.

Darrin
--
Darrin Cardani
dcardani@apple.com
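
A quick check of the numbers quoted above makes the "4 times as much memory per texture" point concrete: at kFxDepth_UINT8, 9104 bytes per row / 2276 columns = 4 bytes per pixel, while at kFxDepth_FLOAT32, 32768 / 2048 = 16 bytes per pixel. That is consistent with 4-channel (ARGB) buffers at 1 byte versus 4 bytes per channel, i.e. four times the memory for a texture of the same dimensions; the 4-channel layout is an inference from the arithmetic here, not something stated in the thread.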
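
To make the save-and-restore suggestion at the end of the reply concrete, here is a minimal sketch. Note that the glTexParameteri calls quoted in the reply omit the texture target argument; the sketch fills it in with GL_TEXTURE_2D, which is an assumption, as are the RenderWithClampToEdge() name and the idea that the input image is already bound as an OpenGL texture. An FxPlug host may actually supply a rectangle texture, whose wrap-mode rules differ (GL_REPEAT is not valid there), so treat this as an outline of the state handling rather than the exact calls a shipping plugin would use.

    #include <OpenGL/gl.h>   /* macOS OpenGL header */

    /* Sketch: render with a temporary wrap mode, then put the previous
     * OpenGL state back. The GL_TEXTURE_2D target and the inputTexture
     * parameter are assumptions for illustration; use whatever texture
     * and target the host actually provides. */
    static void RenderWithClampToEdge(GLuint inputTexture)
    {
        GLint savedWrapS, savedWrapT;

        glBindTexture(GL_TEXTURE_2D, inputTexture);

        /* Save whatever wrap mode is currently in effect. */
        glGetTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, &savedWrapS);
        glGetTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, &savedWrapT);

        /* Clamp to edge for this render (GL_REPEAT would tile instead). */
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

        /* ... draw the filtered output here ... */

        /* Restore the original wrap mode so the host sees the state it expects. */
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, savedWrapS);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, savedWrapT);
    }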