Re: Core Image in FxPlug example
- Subject: Re: Core Image in FxPlug example
- From: Paul Schneider <email@hidden>
- Date: Wed, 29 Apr 2009 17:34:21 -0500
If you're interested in CoreImage development, you might also check
out the quartz-dev mailing list:
http://lists.apple.com/archives/quartz-dev/
And of course, the CI documentation:
http://developer.apple.com/DOCUMENTATION/GraphicsImaging/Conceptual/CoreImaging/ci_intro/ci_intro.html
Also, the CI Kernel language is documented here:
http://developer.apple.com/DOCUMENTATION/GraphicsImaging/Reference/CIKernelLangRef/Introduction/Introduction.html
It's a subset of the OpenGL Shading Language (GLSL):
http://www.opengl.org/documentation/glsl/
There are lots of GLSL examples floating around out there.
- Paul
On Apr 29, 2009, at 4:59 PM, Darrin Cardani wrote:
Brian,
I don't know of any examples of what you want, but it's not too
difficult. I'll try to address it here.
In a Core Image kernel, you pass in a sampler that samples the input
image, and samplers that will sample any other images that your
filter uses. So in the example I sent to the list yesterday, there
were 2 samplers, one for the input image, and one for the map image:
kernel vec4 Displacement (sampler image, sampler mapImage, float scale)
{
    // Read the offset from the map image
    vec4 offset = sample (mapImage, samplerCoord (mapImage));
    offset *= scale;
    // Read from the main input image at the offset position
    return sample (image, samplerCoord (image) + offset.xy);
}
To read a sample from a sampler, you simply call the "sample ()"
function:
vec4 aSample = sample (someSampler, someCoordinates);
The coordinates are a vector, and can pretty much be any real
numbers. You can get the coordinate of the current pixel in any
image by asking the sampler for its current coordinates using the
samplerCoord () function:
vec2 coords = samplerCoord (someSampler);
You can then manipulate those coordinates in any way you want. You
can add to them, multiply them, use them in dot products, etc. Once
you've manipulated your coordinates, you can sample from one of the
samplers, and you can read more than one sample if you want. So a
simple convolution (here, a mild horizontal blur) might look like this:
kernel vec4 multiplyEffect (sampler someSampler)
{
    // Get the current sampler coordinates
    vec2 coords = samplerCoord (someSampler);
    // Subtract 1 from the x coordinate
    vec2 leftCoord = coords - vec2 (1.0, 0.0);
    // Add 1 to the x coordinate
    vec2 rightCoord = coords + vec2 (1.0, 0.0);
    // Sample at the new coordinates, weight the results, and add them
    // together, creating a very mild blur
    return sample (someSampler, leftCoord) * 0.25 +
           sample (someSampler, coords) * 0.5 +
           sample (someSampler, rightCoord) * 0.25;
}
Let me know if that answers your question.
Thanks,
Darrin
On Apr 29, 2009, at 1:15 AM, Brian Gardner wrote:
Hi Darrin.
Thank you for your examples. Very helpful.
Is there any example source code for the Displacement Distortion CI
filter you mention below, which samples 3 pixels from one image and
1 pixel from another image? That would be very helpful for me to see.
There are very few examples that use multiple (or reference) images
and take multiple samples per image. Is there any such sample code
available (like the Displacement Distortion filter you describe below)?
-- Brian
On Apr 28, 2009, at 9:01 AM, Darrin Cardani wrote:
Glad it helps! It looks to me like Displacement Distortion is
using the magnitude and direction of edges in the map image to
offset pixels in the source image. I believe it first converts the
map image to grayscale, then finds the difference of each pixel
with its neighbors in the x and y directions. It multiplies the x
and y differences by the scale parameter and uses the resulting
vector as an offset from the current pixel in the source image.
So let's say the grayscale version of your map image looked like
this:
0.75 0.89
0.23 0.52
It takes (0.75 - 0.89) = -0.14 as the x offset
and (0.75 - 0.23) = 0.52 as the y offset.
It then multiplies those values by the scale parameter. Let's say
it's set to 25, so it comes up with an offset of (-3.5, 13). It
adds that to the current (x,y) coordinate and samples the input
image at (x - 3.5, y + 13).
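To put that in kernel terms, a rough sketch might look something like
the following. This is only my guess at what the built-in filter does,
not Apple's actual code; the kernel name, the grayscale weights, and
the neighbor directions are all assumptions on my part:

kernel vec4 displaceFromMap (sampler image, sampler mapImage, float scale)
{
    // Read the current map pixel and its neighbors in x and y
    vec2 mapCoord = samplerCoord (mapImage);
    vec4 c  = sample (mapImage, mapCoord);
    vec4 cx = sample (mapImage, mapCoord + vec2 (1.0, 0.0));
    vec4 cy = sample (mapImage, mapCoord + vec2 (0.0, 1.0));
    // Convert each to grayscale (assumed luminance weights)
    vec3 lum = vec3 (0.299, 0.587, 0.114);
    float g  = dot (c.rgb, lum);
    float gx = dot (cx.rgb, lum);
    float gy = dot (cy.rgb, lum);
    // The x and y differences, scaled, become the offset
    vec2 offset = vec2 (g - gx, g - gy) * scale;
    // Sample the source image at the offset position
    return sample (image, samplerCoord (image) + offset);
}

Note that this also happens to be an example of reading 3 samples from
one image (the map) and 1 sample from another (the source) in a single
kernel.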
Does that make sense?
Darrin
On Apr 27, 2009, at 5:53 PM, Patrick Sheffield wrote:
Thank you Darrin - this helps a lot.
I wonder if you could shed some light on a specific CoreImage
filter - specifically DisplacementDistortion...
In Shake and in FxScript, the X-Displacement is governed by one
channel in the displace image and the Y-Displacement by another.
I can't find any documentation on the CI filter as to how it
handles this.
Thanks,
Patrick
--
Darrin Cardani
email@hidden
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Pro-apps-dev mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden