Re: Creating Video Software
- Subject: Re: Creating Video Software
- From: "Douglas A. Welton" <email@hidden>
- Date: Tue, 19 Aug 2003 08:22:13 -0400
Regarding the QuickTime API, check out sprites.
A sprite is an animated image that is managed by QuickTime. A sprite is
defined once and is then animated by commands that change its position or
appearance.
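The sprite model (define the image once, then drive it with commands that change position or appearance) can be sketched abstractly. This Python sketch only illustrates the idea; it is not the QuickTime sprite API:

```python
class Sprite:
    """Defined once from an image; afterwards only its state changes."""

    def __init__(self, image_name):
        self.image_name = image_name  # image data is set up once
        self.x, self.y = 0, 0         # current position
        self.scale = 1.0              # current appearance
        self.visible = True

    # Commands mutate state; the image itself is never redefined.
    def move_to(self, x, y):
        self.x, self.y = x, y

    def set_scale(self, s):
        self.scale = s

saber = Sprite("lightsaber.png")
saber.move_to(120, 80)   # drag: update position for this frame
saber.set_scale(1.5)     # resize: update appearance for this frame
```

In QuickTime the same role is played by sprite tracks, where each sprite's matrix and image index can be changed over time.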
later,
douglas
on 8/19/03 1:31 AM, email@hidden at email@hidden wrote:
> I'm about to start development on a new special effects program. It's
> a very basic program that takes images with alpha channels
> and composites them over a video frame. So, it might take a picture of
> a lightsaber and composite it onto a video frame using the alpha
> channel in the lightsaber image. The user will need to be able to drag
> the image around on the screen, resize it, etc. The software will
> create the effects on a frame-by-frame basis.
>
> I'm not sure how to approach creating this software. I've looked into a
> couple different options:
>
> 1) Video frame to NSImage using the QuickTime API: One way to create
> the program would be to convert every frame of the video to an NSImage
> when the movie is loaded. Then I could just use NSImage's compositing
> methods to composite the effects image over the frame. To handle
> scrolling through the frames, the software could just update the
> NSImageView with the correct image for the selected frame. When the
> user is ready to export, it just creates a movie file by combining all
> the different NSImages using the QuickTime API. This route seems to
> take a lot of overhead in terms of memory, especially if it's a large
> movie.
>
> 2) Use the QuickTime API: I haven't looked into using the QuickTime
> API that extensively yet. I suppose there are functions that would let
> me composite the image over a video frame without first converting the
> video frame to an NSImage. I'm not sure how I would display this on
> the screen, though, especially since the user needs to be able to drag
> the image around over the video frame.
>
> 3) OpenGL: I'm not sure if this would even work, but what about
> OpenGL? Perhaps I could use some OpenGL functions to composite the
> images over the video? That might even allow me to composite 3D
> objects over the video frame?
>
> 4) QuickDraw: Maybe I could use the older Carbon QuickDraw functions
> to do the compositing. QuickDraw seems rather old...
>
> Well, no matter which route I choose, I still need some way of keeping
> track of the location of every image on every frame. I suppose I could
> just create an NSMutableDictionary with the position of each corner of
> the image for each frame?
>
> Any suggestions on which route to take and how to keep track of the
> image coordinates would be VERY helpful. Thanks!
_______________________________________________
cocoa-dev mailing list | email@hidden
Help/Unsubscribe/Archives:
http://www.lists.apple.com/mailman/listinfo/cocoa-dev
Do not post admin requests to the list. They will be ignored.