Creating Video Software


  • Subject: Creating Video Software
  • From: email@hidden
  • Date: Tue, 19 Aug 2003 01:31:26 -0400

I'm about to start development on a new special effects program. It's a fairly simple program that takes images with alpha channels and composites them over video frames. For example, it might take a picture of a lightsaber and composite it onto a video frame using the alpha channel in the lightsaber image. The user will need to be able to drag the image around on the screen, resize it, and so on. The software will create the effects on a frame-by-frame basis.

I'm not sure how to approach creating this software. I've looked into a couple of different options:

1) Video frame to NSImage using the QuickTime API: One way to build the program would be to convert every frame of the video to an NSImage when the movie is loaded. Then I could just use NSImage's compositing methods to draw the effects image over the frame. To handle scrolling through the frames, the software would just update the NSImageView with the correct image for the selected frame. When the user is ready to export, it would create a movie file by combining all the NSImages using the QuickTime API. This route seems to carry a lot of memory overhead, especially if it's a large movie.
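
Something like this is what I have in mind for that compositing step (untested; the helper name and the overlay origin are just placeholders):

#import <Cocoa/Cocoa.h>

// Returns a new image with `overlay` drawn over `frame` at `origin`,
// letting NSCompositeSourceOver honor the overlay's alpha channel.
NSImage *CompositeOverlay(NSImage *frame, NSImage *overlay, NSPoint origin)
{
    NSImage *result = [[NSImage alloc] initWithSize:[frame size]];
    [result lockFocus];
    [frame compositeToPoint:NSZeroPoint operation:NSCompositeCopy];
    [overlay compositeToPoint:origin operation:NSCompositeSourceOver];
    [result unlockFocus];
    return [result autorelease];
}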

2) Use the QuickTime API directly: I haven't looked into the QuickTime API that extensively yet. I suppose there are functions that would let me composite the image over a video frame without first converting the frame to an NSImage. I'm not sure how I would display the result on the screen, though, especially since the user needs to be able to drag the image around over the video frame.
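
About the only relevant call I've found so far is GetMoviePict(), which would at least get me a frame I could draw into a GWorld and composite there (untested guess):

#include <QuickTime/QuickTime.h>

// Untested: grab one frame of an opened Movie as a QuickDraw picture
// at time t. The caller presumably owns the PicHandle and must
// KillPicture() it when done.
PicHandle FramePictAtTime(Movie movie, TimeValue t)
{
    return GetMoviePict(movie, t);
}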

3) OpenGL: I'm not sure if this would even work, but what about OpenGL? Perhaps I could use some OpenGL functions to composite the images over the video? That might even allow me to composite 3D objects over the video frame.
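
If I went that way, I picture something like this (untested sketch; it assumes the frame and overlay have already been uploaded as textures with glTexImage2D, and that an orthographic projection is set up):

#include <OpenGL/gl.h>

// Draw a textured quad at (x, y) with size w x h.
static void DrawQuad(GLuint tex, float x, float y, float w, float h)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(x,     y);
    glTexCoord2f(1, 0); glVertex2f(x + w, y);
    glTexCoord2f(1, 1); glVertex2f(x + w, y + h);
    glTexCoord2f(0, 1); glVertex2f(x,     y + h);
    glEnd();
}

// Draw the video frame opaque, then blend the overlay on top
// using its alpha channel (standard "source over" blending).
void RenderComposite(GLuint frameTex, float fw, float fh,
                     GLuint overlayTex, float ox, float oy,
                     float ow, float oh)
{
    glEnable(GL_TEXTURE_2D);
    glDisable(GL_BLEND);
    DrawQuad(frameTex, 0, 0, fw, fh);

    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    DrawQuad(overlayTex, ox, oy, ow, oh);
}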

4) QuickDraw: Maybe I could use the older Carbon QuickDraw functions to do the compositing. QuickDraw seems rather old, though...

Well, no matter which route I choose, I still need some way of keeping track of the location of every image on every frame. I suppose I could just create an NSMutableDictionary with the position of each corner of the image for each frame?
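
For example, something along these lines (untested; since the image is axis-aligned, an NSRect's origin and size give me all four corners):

#import <Cocoa/Cocoa.h>

// Key the dictionary by frame number, boxing the overlay's rect
// in an NSValue.
void SetOverlayRect(NSMutableDictionary *positions, int frame, NSRect rect)
{
    [positions setObject:[NSValue valueWithRect:rect]
                  forKey:[NSNumber numberWithInt:frame]];
}

NSRect OverlayRectForFrame(NSMutableDictionary *positions, int frame)
{
    NSValue *boxed = [positions objectForKey:[NSNumber numberWithInt:frame]];
    return boxed ? [boxed rectValue] : NSZeroRect;
}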

Any suggestions on which route to take and how to keep track of the image coordinates would be VERY helpful. Thanks!