Re: Custom Core Image filter help
- Subject: Re: Custom Core Image filter help
- From: Daniel Thorpe <email@hidden>
- Date: Wed, 20 Feb 2008 18:07:38 +0000
Hey Nicko, thanks for getting back to me...
I think I've got around the problem of needing a kernel for
different-sized images, although it is a bit of a hack. I've written
a script that generates a .cikernel file containing as many kernel
functions as I want, each with the correct width set as a local
variable. Then when I load the kernels in the CIFilter subclass, I
store an array of all the kernels and select the correct one in the
outputImage method. This seems to work, or at least it compiles and
runs without any errors.
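For reference, the host-side loading and selection looks roughly like
this (the class name, the ivars and the index calculation are
simplified/illustrative, not the exact code):

#import <QuartzCore/QuartzCore.h>

@interface TchebichefMomentsFilter : CIFilter
{
    CIImage *inputImage;       // the source image
    CIImage *inputTchebichef;  // precomputed table of t(m,x) values
}
@end

@implementation TchebichefMomentsFilter

static NSArray *kernels = nil;

+ (void)initialize
{
    if (kernels == nil) {
        NSString *path = [[NSBundle bundleForClass:[self class]]
                              pathForResource:@"TchebichefMoments"
                                       ofType:@"cikernel"];
        NSString *code = [NSString stringWithContentsOfFile:path
                                                   encoding:NSUTF8StringEncoding
                                                      error:NULL];
        // kernelsWithString: returns one CIKernel per function in the file
        kernels = [[CIKernel kernelsWithString:code] retain];
    }
}

- (CIImage *)outputImage
{
    // Select the kernel whose baked-in N matches the input width.
    // (The real index calculation depends on how the script orders the
    //  generated kernels; index 0 is used here purely as a placeholder.)
    CIKernel *kernel = [kernels objectAtIndex:0];
    CISampler *src = [CISampler samplerWithImage:inputImage];
    CISampler *tch = [CISampler samplerWithImage:inputTchebichef];
    return [self apply:kernel, src, tch,
                 kCIApplyOptionDefinition, [src definition], nil];
}

@end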
I am having some problems with my kernel code, however, which I've
changed as you suggested (I had already done this in my Obj-C
implementation)...
This is my kernel code:
kernel vec4 ComputeTchebichefMomentsForN_%.1lf(sampler src, sampler tchebichef)
{
    const float N = %.1lf;   // width placeholder filled in by the generating script
    float x, y;
    vec4 ans = vec4(0.0, 0.0, 0.0, 1.0);
    for (y = 0.0; y < N; y++) {
        vec2 ny;
        ny.x = y;
        ny.y = destCoord().x;
        vec4 tmp = vec4(0.0, 0.0, 0.0, 1.0);
        for (x = 0.0; x < N; x++) {
            vec2 mx;
            mx.x = x;
            mx.y = destCoord().y;
            vec2 xy;
            xy.x = x;
            xy.y = y;
            tmp += ( sampleWorking(tchebichef, mx) * sampleWorking(src, xy) );
        }
        ans += ( sampleWorking(tchebichef, ny) * tmp );
    }
    return ans;
}
and
vec4 sampleWorking(sampler src, vec2 pos)
{
    // Sample at a working-space coordinate by mapping it through samplerTransform
    return sample(src, samplerTransform(src, pos));
}
Unfortunately, this doesn't seem to work yet, as it crashes
QuartzComposer instantly (as soon as I type in the kernel code with a
suitable N), which leads me to think that it's syntactically correct,
but essentially wrong...
So... have you got any other thoughts?
Thanks a lot for your help thus far - it's much appreciated!
Cheers
Dan
On 20 Feb 2008, at 10:12, Nicko van Someren wrote:
On 18 Feb 2008, at 12:42, Daniel Thorpe wrote:
Hello everyone, I'm a complete newbie when it comes to Core Image,
so I'm hoping some experienced people might be able to help...
We can but try :-)
I'm trying to compute discrete orthogonal image moments, using
Tchebichef polynomials,
...
T(m,n) = SUM_x { SUM_y { t(m, x) * t(n,y) * f(x,y) }}
...
I'm not sure if this can be done in Core Image because the kernel
will need to loop over the entire input image domain... and to use
the for control statement, "the loop condition must be inferred at
the time the code compiles" - which leads me to think that such a
kernel will not work for different sized images?
The big pain here is that Apple don't support the OpenGL
matrixCompMult function in their dialect of the shading language.
That said, observe that CIKernel objects get initialised with a
string, which you can construct on the fly, so needing to set the
size at 'compile' time only means at the time that the kernel object
is created.
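For example, something along these lines (an untested sketch; the
'scaleByN' kernel and 'imageWidth' are just placeholders for your own
kernel and width):

// Build the kernel source at run time, substituting the actual image
// width, so the loop bound is still a constant by the time the kernel
// source is compiled.
float imageWidth = 64.0f;   // placeholder value
NSString *source = [NSString stringWithFormat:
    @"kernel vec4 scaleByN(sampler src)\n"
    @"{\n"
    @"    const float N = %.1f;\n"
    @"    return sample(src, samplerCoord(src)) / N;\n"
    @"}\n", imageWidth];
CIKernel *kernel = [[CIKernel kernelsWithString:source] objectAtIndex:0];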
Apple do support the 'dot' function. I would therefore suggest that
you rewrite your equation like this:
T(m,n) = SUM_y { t(n,y) * (SUM_x { t(m, x) * f(x,y) } ) }
You can then construct an 'image' representing the t function, with
the pixel at x,y representing t(y,x). This allows you to compute the
inner sum by selecting row m from the image and taking its
dot-product with each row of the input image. You can then use a for
loop to sum up those sums, scaled by the values taken from row n.
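To make the t 'image' concrete, it could be built from a float bitmap
roughly like this (a sketch only: tchebichefValue() below stands in
for however you evaluate the polynomials, and the rest is just one way
to wrap the table as a CIImage):

// Build an N x N float bitmap in which the pixel at (x,y) holds
// t(y,x), then wrap it as a CIImage to feed to the kernel as a sampler.
static float tchebichefValue(size_t m, size_t x)
{
    return 0.0f;   // substitute the real discrete Tchebichef polynomial here
}

static CIImage *tchebichefImageForN(size_t N)
{
    size_t bytesPerRow = N * 4 * sizeof(float);
    NSMutableData *bitmap = [NSMutableData dataWithLength:N * bytesPerRow];
    float *p = [bitmap mutableBytes];
    for (size_t y = 0; y < N; y++) {
        for (size_t x = 0; x < N; x++) {
            size_t i = (y * N + x) * 4;
            float t = tchebichefValue(y, x);
            p[i] = p[i + 1] = p[i + 2] = t;   // store t in R, G and B
            p[i + 3] = 1.0f;                  // opaque alpha
        }
    }
    return [CIImage imageWithBitmapData:bitmap
                            bytesPerRow:bytesPerRow
                                   size:CGSizeMake(N, N)
                                 format:kCIFormatRGBAf
                             colorSpace:nil];
}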
I hope this helps.
Nicko