CATransform3D perspective question

  • Subject: CATransform3D perspective question
  • From: Nicko van Someren <email@hidden>
  • Date: Wed, 7 Nov 2007 19:27:15 +0000

I'm having some trouble using Core Animation's 3D transforms to get a perspective effect. Part of the problem is that the exact nature of the transformations is not documented, as far as I can tell, and they seem not to behave the way I've seen these transforms used before.

The CATransform3D type is a 4 by 4 matrix. Typically these matrices are multiplied by the vector (x,y,z,1) to return a vector (x', y', z', s) and the display co-ordinates are (x'/s, y'/s), with z'/s being used for z-buffer values if drawing is done with z-buffering (which I don't think CA supports).

My initial tests supported my view that this was the way CA was going to use the matrix. Rotations about the z axis put sin(t) and cos(t) values into the m11, m12, m21 and m22 slots; translations affect the m4{1,2,3} slots; scales work as expected. Most importantly, the only example I could find of applying a perspective transformation is three lines of code in Listing 2 of the CA "Layer Geometry and Transforms" docs, which uses the standard technique of putting -(1/distance) into the m34 slot (this exact same code also appears in the CoverFlow example). Adjusting the zPosition value for a layer then zooms the layer in and out. So far so good.

The problem is simply this; applying a 90 degree rotation about the y axis does NOT turn the image edge-on. I've hooked up a rotary NSSlider to an action that makes a transform thus:
	flip = CATransform3DMakeRotation(rotation, 0, 1, 0);
I'm printing this transform and then setting it on a layer containing an image. Rotating the slider rotates the image, and if the slider is set to 90 degrees then the transform is, as expected, 1.0 in m13, m22, m31 and m44, with zeros everywhere else. This _should_ produce a transform where the output 'x' co-ordinates are invariant of the 'x' input (and in fact should depend solely on the 'z' position). Unfortunately, when I do this I get an image which looks like it's turned by about 75 degrees, definitely not edge-on, and very much with the output x co-ordinates dependent on the input x value. I have to turn the slider to about 115 degrees to get the image edge-on.


So, my question is: what exactly is the process by which the layer co-ordinates are converted to the display co-ordinates? The documentation on this seems to be missing, and while it looks like it should be fairly standard, it does not function as expected. Any help would be much appreciated.

	Nicko

_______________________________________________

Cocoa-dev mailing list (email@hidden)

Please do not post admin requests or moderator comments to the list.
Contact the moderators at cocoa-dev-admins(at)lists.apple.com

Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden

