Re: CATransform3D perspective question
- Subject: Re: CATransform3D perspective question
- From: John Harper <email@hidden>
- Date: Sat, 01 Dec 2007 09:10:44 -0800
Presumably you did what the samples do and set the perspective matrix
as the sublayerTransform property of the superlayer of the layer
you're rotating?
Both the sublayerTransform and transform properties are applied relative to the anchor point (typically the center, though AppKit sets the anchor point of layers it creates to the bottom-left corner, IIRC) of the layer the property is set on. So if you have two layers, one with a perspective matrix and its child rotated around the Y axis by 90°, you will only see the child layer exactly edge-on when its center is aligned with the center of its superlayer. Does that explain what you're seeing?
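In code, the setup being described is roughly this (a sketch only; the layer names and the 850-point eye distance are illustrative assumptions, not values from this thread):

#import <QuartzCore/QuartzCore.h>

/* Sketch: perspective on the superlayer, rotation on the child, with the
   child's center aligned to the superlayer's center as described above. */
static void setUpPerspective(CALayer *container, CALayer *child)
{
    CATransform3D perspective = CATransform3DIdentity;
    perspective.m34 = -1.0 / 850.0;             /* -(1/distance); 850 is arbitrary */
    container.sublayerTransform = perspective;  /* set on the superlayer */

    child.anchorPoint = CGPointMake(0.5, 0.5);  /* rotate about the center */
    child.position = CGPointMake(CGRectGetMidX(container.bounds),
                                 CGRectGetMidY(container.bounds));
    child.transform = CATransform3DMakeRotation(M_PI_2, 0, 1, 0); /* 90° about Y */
}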
Other than that, your expectations seem correct.

John
On Nov 7, 2007, at 11:27 AM, Nicko van Someren wrote:
I'm having some trouble with using Core Animation's 3D transforms to
get a perspective effect. Part of the problem is that the exact
nature of the transformations is not documented as far as I can
tell, and it seems not to behave the way that I've seen these
transforms used before.
The CATransform3D type is a 4 by 4 matrix. Typically these matrices
are multiplied by the vector (x,y,z,1) to return a vector (x', y',
z', s) and the display co-ordinates are (x'/s, y'/s), with z'/s
being used for z-buffer values if drawing is done with z-buffering
(which I don't think CA supports).
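To make that convention concrete, here is a sketch of the multiply in plain C against the CATransform3D fields, using the row-vector convention that the translation slots below imply ((x, y, z, 1) times the matrix, then divide by s):

#import <QuartzCore/QuartzCore.h>
#include <stdio.h>

/* Apply a CATransform3D to (x, y, z, 1) as a row vector and perform the
   perspective divide; prints the resulting display co-ordinates. */
static void projectPoint(CATransform3D m, CGFloat x, CGFloat y, CGFloat z)
{
    CGFloat xp = x * m.m11 + y * m.m21 + z * m.m31 + m.m41;
    CGFloat yp = x * m.m12 + y * m.m22 + z * m.m32 + m.m42;
    CGFloat s  = x * m.m14 + y * m.m24 + z * m.m34 + m.m44;
    printf("(%g, %g)\n", (double)(xp / s), (double)(yp / s));
}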
My initial tests supported my view that this was the way CA was
going to use the matrix. Rotations about the z axis put sin(t) and
cos(t) values into the m11, m12, m21 and m22 slots; translations
affect the m4{1,2,3} slots; scales work as expected. Most
importantly, the only example I could find for applying a
perspective transformation was three lines of code in the CA "Layer
Geometry and Transforms" docs, listing 2, which uses the standard
technique of putting -(1/distance) into the m34 slot (this exact
same code also appears in the CoverFlow example). Adjusting the
zPosition value for a layer zooms the layer in and out. So far so
good.
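Spelled out, that technique looks something like this (the 850.0 distance and the layer names here are placeholders of mine, not the values from the docs):

CATransform3D perspective = CATransform3DIdentity;
perspective.m34 = -1.0 / 850.0;              /* -(1/distance) */
parentLayer.sublayerTransform = perspective; /* perspective for its sublayers */
imageLayer.zPosition = -200.0;               /* negative z: farther away, so smaller */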
The problem is simply this: applying a 90 degree rotation about the y axis does NOT turn the image edge-on. I've hooked up a rotary NSSlider to an action that makes a transform thus:
CATransform3D flip = CATransform3DMakeRotation(rotation, 0, 1, 0);
I'm printing this transform and then setting the transform on a
layer containing an image. Rotating the slider rotates the image
and if the slider is set to 90 degrees then the transform is the
expected: 1.0 in m13, m22, m31 and m44, with zeros everywhere else.
This _should_ produce a transform where the output 'x' co-ordinates are independent of the 'x' input (and in fact depend solely on the 'z' position). Unfortunately, when I do this I get an image which looks like it has been turned by about 75 degrees, definitely not edge-on, and very much with the output x co-ordinates dependent on the input x value. I have to turn the slider to about 115 degrees to get the image edge-on.
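As a sanity check of the claim that x should drop out, one can feed the 90° matrix through a by-hand multiply of the kind sketched above:

#include <math.h>

/* At exactly 90° about Y the x input should drop out: x' depends only on z. */
CATransform3D flip90 = CATransform3DMakeRotation(M_PI_2, 0, 1, 0);
projectPoint(flip90, 100.0, 50.0, 0.0);  /* display x should be 0 when z == 0 */
projectPoint(flip90, -30.0, 50.0, 0.0);  /* same display x as the line above */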
So, my question is: what exactly is the process by which the layer co-ordinates are converted to display co-ordinates? The documentation on this seems to be missing, and while the process looks like it should be fairly standard, it does not behave as expected. Any help would be much appreciated.
Nicko
_______________________________________________
Cocoa-dev mailing list (email@hidden)
Please do not post admin requests or moderator comments to the list.
Contact the moderators at cocoa-dev-admins(at)lists.apple.com
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden