2007-06-28
State Key Lab of CAD&CG, ZJU
Shape from X
Many cues can be used for inferring object shapes from images. Based on the number of images used, such methods fall into two categories: those that use a single image and those that use multiple images.
Shape can be recovered from a single image by the human visual system based on the shading information.
To estimate the shape from a single image, some assumptions have to be made, e.g., constant albedo (surface color):
$L(P) = R_{\rho,\mathbf{i}} = \rho\,\mathbf{i}^\top \mathbf{n}$
[Figure: image formation — scene illuminance, surface luminance, and image intensity]
The image irradiance at $\mathbf{p} = [x, y]^\top$, the image of scene point P, is
$E(\mathbf{p}) = L(P)\,\frac{\pi}{4}\left(\frac{d}{f}\right)^2 \cos^4\alpha$
where d is the diameter of the lens, f its focal length, and α the angle between the optical axis and the ray through P.
Assumptions:
If we neglect the constant term, assume the optical system has been calibrated to compensate for the cos⁴α effect, and in addition assume that all the visible points of the surface receive direct illumination, we have the fundamental equation for shape from shading:
$E(\mathbf{p}) = R_{\rho,\mathbf{i}}(\mathbf{n})$
Notice that the image intensity is determined only by the surface normal vector
$\mathbf{n} = \dfrac{[-p, -q, 1]^\top}{\sqrt{1 + p^2 + q^2}}$
[Figure: surface orientation — the depth map Z = Z(x, y) over the image plane, with tangent vectors (1, 0, Z_x) and (0, 1, Z_y) along the X and Y directions]
Assume that:
the scene is far away from the camera, so we can use a weak-perspective camera model to describe the projection;
the average depth of the scene is Z₀.
$\left[1,\; 0,\; \frac{\partial Z}{\partial X}\right] = \left[1,\; 0,\; \frac{\partial Z}{\partial x}\frac{\partial x}{\partial X}\right] = \left[1,\; 0,\; \frac{\partial Z}{\partial x}\frac{f}{Z_0}\right] \quad\text{and}\quad \left[0,\; 1,\; \frac{\partial Z}{\partial y}\frac{f}{Z_0}\right]$
We denote the above vectors as [1, 0, p]T and [0, 1, q]T . The
normal vector is the cross product of these two vectors, therefore
$\mathbf{n} = [1, 0, p]^\top \times [0, 1, q]^\top = \frac{[-p, -q, 1]^\top}{\sqrt{1 + p^2 + q^2}}$
$E(x, y) = R_{\rho,\mathbf{i}}(p, q) = \frac{\rho\,\mathbf{i}^\top [-p, -q, 1]^\top}{\sqrt{1 + p^2 + q^2}}$
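To make the reflectance map concrete, here is a minimal Python sketch of the Lambertian model above; the illuminant direction, albedo, and slope grid below are made-up example values:

```python
import numpy as np

def reflectance_map(p, q, i, rho=1.0):
    """Lambertian reflectance R_{rho,i}(p, q) for surface slopes p, q.

    p, q : arrays of partial derivatives of the depth Z along x and y.
    i    : unit illuminant direction, shape (3,).
    rho  : constant albedo (assumed uniform, as in the slides).
    """
    # Unnormalized surface normal [-p, -q, 1] at every pixel.
    n = np.stack([-p, -q, np.ones_like(p)], axis=-1)
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    # Image intensity is rho * <i, n>, clipped at zero (attached shadows).
    return np.clip(rho * (n @ i), 0.0, None)

# Example: slopes of a tilted patch under frontal illumination.
i = np.array([0.0, 0.0, 1.0])                       # example illuminant
p = np.linspace(-1, 1, 5)[None, :] * np.ones((5, 5))
q = np.linspace(-1, 1, 5)[:, None] * np.ones((5, 5))
E = reflectance_map(p, q, i)
```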
Assumptions
Given the reflectance map of the surface, $R_{\rho,\mathbf{i}}(p, q)$, and full knowledge of the parameters ρ and i relative to the available image, reconstruct the surface slopes p and q for which
$E(x, y) = R_{\rho,\mathbf{i}}(p, q)$
Under the weak-perspective model,
$p = \frac{\partial Z}{\partial x}\,\frac{f}{Z_0} \quad\text{and}\quad q = \frac{\partial Z}{\partial y}\,\frac{f}{Z_0}$
Assumption
The constant factor f/Z₀ can be absorbed by rescaling the depth. Therefore
$p = \frac{\partial Z}{\partial x} \quad\text{and}\quad q = \frac{\partial Z}{\partial y}$
Algorithm_Approximate_Albedo_Illuminant
Compute the average of the image intensity, ⟨E⟩, and of its square, ⟨E²⟩.
Compute the average spatial image gradient, ⟨Ex, Ey⟩.
Estimate the albedo and the illuminant angles from these statistics, using the reflectance model
$E(x, y) = R_{\rho,\mathbf{i}}(p, q) = \frac{\rho\,\mathbf{i}^\top [-p, -q, 1]^\top}{\sqrt{1 + p^2 + q^2}}$
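A sketch of this step in Python. The slide omits the closed-form estimates, so the moment-based formulas below (a standard choice for this kind of algorithm) should be read as an assumption rather than as the slide's exact equations:

```python
import numpy as np

def approx_albedo_illuminant(E):
    """Estimate albedo and illuminant direction from image statistics.

    Uses the averages named in the algorithm above; the closed-form
    albedo/slant constants are assumed moment-based estimates.
    """
    mu1 = E.mean()                      # <E>
    mu2 = (E ** 2).mean()               # <E^2>
    Ey, Ex = np.gradient(E.astype(float))
    # Average normalized gradient direction gives the illuminant tilt.
    norm = np.sqrt(Ex ** 2 + Ey ** 2) + 1e-12
    ex, ey = (Ex / norm).mean(), (Ey / norm).mean()
    tilt = np.arctan2(ey, ex)
    # Moment-based albedo and slant (assumed formulas).
    gamma = np.sqrt(max(6 * np.pi ** 2 * mu2 - 48 * mu1 ** 2, 0.0))
    albedo = gamma / np.pi
    slant = np.arccos(np.clip(4 * mu1 / (gamma + 1e-12), -1.0, 1.0))
    i = np.array([np.cos(tilt) * np.sin(slant),
                  np.sin(tilt) * np.sin(slant),
                  np.cos(slant)])
    return albedo, i
```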
Euler-Lagrange equation
If J is defined as $J(q) = \int L(t, q, \dot{q})\, dt$, where $\dot{q} = dq/dt$, then the first variation of J vanishes if the Euler-Lagrange equation
$\frac{\partial L}{\partial q} - \frac{d}{dt}\frac{\partial L}{\partial \dot{q}} = 0$
is satisfied.
Euler-Lagrange equation
For shape from shading, J is the functional
$J(p, q) = \iint \left[ (E(x, y) - R(p, q))^2 + \lambda\,(p_x^2 + p_y^2 + q_x^2 + q_y^2) \right] dx\, dy$
where the second term penalizes non-smooth solutions. Therefore, the Euler-Lagrange equations
$\lambda \nabla^2 p = -(E - R)\frac{\partial R}{\partial p} \quad\text{and}\quad \lambda \nabla^2 q = -(E - R)\frac{\partial R}{\partial q}$
can be simplified, by approximating the Laplacian with the difference between the local average and the central value, into an updating rule for p and q.
An iterative algorithm
$p_{ij}$ and $q_{ij}$ can be solved for by starting from some initial solution at step 0 and advancing from step k to step k+1 using the updating rule
$p_{ij}^{k+1} = \bar{p}_{ij}^{k} + \frac{1}{\lambda}\left(E_{ij} - R(\bar{p}_{ij}^{k}, \bar{q}_{ij}^{k})\right)\frac{\partial R}{\partial p}, \qquad q_{ij}^{k+1} = \bar{q}_{ij}^{k} + \frac{1}{\lambda}\left(E_{ij} - R(\bar{p}_{ij}^{k}, \bar{q}_{ij}^{k})\right)\frac{\partial R}{\partial q} \qquad (1)$
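A sketch of rule (1), assuming the Lambertian reflectance map above with unit albedo, a known unit illuminant i, periodic boundary handling, and illustrative values for λ and the iteration count:

```python
import numpy as np

def sfs_iterate(E, i, lam=100.0, iters=200):
    """Iterative shape-from-shading update (1); returns slope maps p, q."""
    p = np.zeros_like(E, dtype=float)
    q = np.zeros_like(E, dtype=float)
    for _ in range(iters):
        # Local averages realize the smoothness (Laplacian) term.
        pbar = 0.25 * (np.roll(p, 1, 0) + np.roll(p, -1, 0)
                       + np.roll(p, 1, 1) + np.roll(p, -1, 1))
        qbar = 0.25 * (np.roll(q, 1, 0) + np.roll(q, -1, 0)
                       + np.roll(q, 1, 1) + np.roll(q, -1, 1))
        denom = np.sqrt(1 + pbar**2 + qbar**2)
        R = (-i[0] * pbar - i[1] * qbar + i[2]) / denom
        # Partial derivatives of R with respect to p and q (chain rule).
        dR_dp = -i[0] / denom - pbar * R / denom**2
        dR_dq = -i[1] / denom - qbar * R / denom**2
        p = pbar + (1.0 / lam) * (E - R) * dR_dp
        q = qbar + (1.0 / lam) * (E - R) * dR_dq
    return p, q
```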
Enforcing integrability
The updated estimates of p and q need not be integrable, i.e., they need not be the partial derivatives of any surface Z(x, y).
Solution:
Expand Z(x, y) on a basis of integrable functions (e.g., the Fourier basis), recompute the expansion coefficients from the current p and q, and replace p and q with the derivatives of the reconstructed Z. (2)
Output Z, p, q
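For the integrability step, one standard realization recovers the least-squares integrable surface in the Fourier domain (a Frankot-Chellappa-style projection). This sketch is offered under that assumption, not as the slide's exact equation (2):

```python
import numpy as np

def integrate_gradients(p, q):
    """Recover depth Z from slope estimates (p, q) = (Z_x, Z_y)."""
    h, w = p.shape
    wx = np.fft.fftfreq(w) * 2 * np.pi          # frequencies along x
    wy = np.fft.fftfreq(h) * 2 * np.pi          # frequencies along y
    u, v = np.meshgrid(wx, wy)
    P, Q = np.fft.fft2(p), np.fft.fft2(q)
    denom = u**2 + v**2
    denom[0, 0] = 1.0                           # avoid division by zero at DC
    # Least-squares solution: Z_hat = (-j*u*P - j*v*Q) / (u^2 + v^2)
    Z_hat = (-1j * u * P - 1j * v * Q) / denom
    Z_hat[0, 0] = 0.0                           # depth recovered up to a constant
    return np.real(np.fft.ifft2(Z_hat))
```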
Experimental results
[Figure: the kernel-methods pipeline — data (possibly nonlinear), an embedding φ(x), a kernel function κ(x, z), the kernel matrix, a pattern-analysis (PA) algorithm such as an SVM, and the resulting pattern function $f(\mathbf{x}) = \sum_i \alpha_i\, \kappa(\mathbf{x}_i, \mathbf{x})$]
A linear pattern function is
$g(\mathbf{x}) = \langle \mathbf{w}, \mathbf{x} \rangle = \mathbf{w}'\mathbf{x} = \sum_{i=1}^{n} w_i x_i$
Given a training set $S = \{(\mathbf{x}_1, y_1), (\mathbf{x}_2, y_2), \ldots, (\mathbf{x}_l, y_l)\}$, an example is classified correctly when
$f(\mathbf{x}, y) = y\, g(\mathbf{x}) = y\, \langle \mathbf{w}, \mathbf{x} \rangle \ge 0$
and the squared loss of g on S is
$L(g, S) = L(\mathbf{w}, S) = \sum_{i=1}^{l} (y_i - g(\mathbf{x}_i))^2 = \sum_{i=1}^{l} L(g, (\mathbf{x}_i, y_i))^2$
Least squares approximation
$L(\mathbf{w}, S) = \|\mathbf{y} - X\mathbf{w}\|^2 = (\mathbf{y} - X\mathbf{w})'(\mathbf{y} - X\mathbf{w})$
We have
$\frac{\partial L(\mathbf{w}, S)}{\partial \mathbf{w}} = -2X'\mathbf{y} + 2X'X\mathbf{w}$
Setting this derivative to zero yields the normal equations
$X'X\mathbf{w} = X'\mathbf{y}$
$\mathbf{w} = (X'X)^{-1} X'\mathbf{y}$
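A quick numerical check of the normal equations; the design matrix and ground-truth weights below are made-up example data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))            # example design matrix
w_true = np.array([1.0, -2.0, 0.5])      # example ground-truth weights
y = X @ w_true + 0.01 * rng.normal(size=100)

# Solve the normal equations X'Xw = X'y rather than forming the inverse.
w = np.linalg.solve(X.T @ X, X.T @ y)
assert np.allclose(w, w_true, atol=0.01)
```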
Dual representation
$\mathbf{w} = (X'X)^{-1}X'\mathbf{y} = X'X(X'X)^{-2}X'\mathbf{y} = X'\boldsymbol{\alpha}$
$g(\mathbf{x}) = \langle \mathbf{w}, \mathbf{x} \rangle = \mathbf{w}'\mathbf{x} = \boldsymbol{\alpha}'X\mathbf{x} = \sum_{i=1}^{l} \alpha_i \langle \mathbf{x}_i, \mathbf{x} \rangle$
Ridge regression
Primal:
$\min_{\mathbf{w}} L_\lambda(\mathbf{w}, S) = \min_{\mathbf{w}} \lambda \|\mathbf{w}\|^2 + \sum_{i=1}^{l} (y_i - g(\mathbf{x}_i))^2$
$X'X\mathbf{w} + \lambda\mathbf{w} = (X'X + \lambda I_n)\,\mathbf{w} = X'\mathbf{y}$
$\mathbf{w} = (X'X + \lambda I_n)^{-1} X'\mathbf{y}$
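The primal solution in code; the regularization value λ is illustrative:

```python
import numpy as np

def ridge_primal(X, y, lam=1.0):
    # w = (X'X + lam*I_n)^{-1} X'y, the primal ridge solution above.
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)
```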
Ridge regression
Dual:
$\mathbf{w} = \lambda^{-1} X'(\mathbf{y} - X\mathbf{w}) = X'\boldsymbol{\alpha}$, so $\boldsymbol{\alpha} = \lambda^{-1}(\mathbf{y} - X\mathbf{w})$
$G = XX' \quad\text{or}\quad G_{ij} = \langle \mathbf{x}_i, \mathbf{x}_j \rangle$
$\lambda\boldsymbol{\alpha} = \mathbf{y} - XX'\boldsymbol{\alpha} \;\Rightarrow\; (XX' + \lambda I_l)\,\boldsymbol{\alpha} = \mathbf{y} \;\Rightarrow\; \boldsymbol{\alpha} = (G + \lambda I_l)^{-1}\mathbf{y}$
$g(\mathbf{x}) = \langle \mathbf{w}, \mathbf{x} \rangle = \sum_{i=1}^{l} \alpha_i \langle \mathbf{x}_i, \mathbf{x} \rangle = \mathbf{y}'(G + \lambda I_l)^{-1}\mathbf{k}, \quad\text{where } k_i = \langle \mathbf{x}_i, \mathbf{x} \rangle$
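A minimal sketch of the dual computation, using the plain inner product so that G = XX' (λ illustrative):

```python
import numpy as np

def ridge_dual_predict(X, y, x_new, lam=1.0):
    """Dual ridge regression: alpha = (G + lam*I_l)^{-1} y and
    g(x) = y'(G + lam*I_l)^{-1} k with k_i = <x_i, x>."""
    l = X.shape[0]
    G = X @ X.T                              # Gram matrix G_ij = <x_i, x_j>
    alpha = np.linalg.solve(G + lam * np.eye(l), y)
    k = X @ x_new                            # k_i = <x_i, x_new>
    return alpha @ k
```

Because only inner products of the data appear, G and k can be replaced by any kernel evaluation, which is exactly the step the following slides take.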
An embedding map
$\phi: \mathbf{x} \mapsto \phi(\mathbf{x}) \in F$
transforms the training set
$S = \{(\mathbf{x}_1, y_1), (\mathbf{x}_2, y_2), \ldots, (\mathbf{x}_l, y_l)\} \;\rightarrow\; \hat{S} = \{(\phi(\mathbf{x}_1), y_1), (\phi(\mathbf{x}_2), y_2), \ldots, (\phi(\mathbf{x}_l), y_l)\}$
$f(\mathbf{x}, y) = y\, g(\mathbf{x}) = y\, \langle \mathbf{w}, \phi(\mathbf{x}) \rangle \ge 0$
$G = XX' \quad\text{or}\quad G_{ij} = \langle \phi(\mathbf{x}_i), \phi(\mathbf{x}_j) \rangle$
$k_i = \langle \phi(\mathbf{x}_i), \phi(\mathbf{x}) \rangle$
Kernel function
$\kappa(\mathbf{x}, \mathbf{z}) = \langle \phi(\mathbf{x}), \phi(\mathbf{z}) \rangle$
where $\phi: \mathbf{x} \mapsto \phi(\mathbf{x}) \in F$
Kernel Example
For $\mathbf{x}, \mathbf{z} \in \mathbb{R}^2$:
$\langle \phi(\mathbf{x}), \phi(\mathbf{z}) \rangle = x_1^2 z_1^2 + x_2^2 z_2^2 + 2 x_1 x_2 z_1 z_2 = (x_1 z_1 + x_2 z_2)^2 = \langle \mathbf{x}, \mathbf{z} \rangle^2$
with $\phi(\mathbf{x}) = (x_1^2, x_2^2, \sqrt{2}\, x_1 x_2)$.
More generally, for $\mathbf{x}, \mathbf{z} \in \mathbb{R}^n$:
$\langle \mathbf{x}, \mathbf{z} \rangle^d = \left( \sum_{i=1}^{n} x_i z_i \right)^d = \sum_{j_1, j_2, \ldots, j_d = 1}^{n} x_{j_1} x_{j_2} \cdots x_{j_d}\, z_{j_1} z_{j_2} \cdots z_{j_d} = \langle \phi(\mathbf{x}), \phi(\mathbf{z}) \rangle$
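A quick numerical check of the d = 2 identity; the two vectors are chosen arbitrarily:

```python
import numpy as np

def phi(v):
    # Explicit feature map for the d = 2 kernel on R^2.
    return np.array([v[0]**2, v[1]**2, np.sqrt(2) * v[0] * v[1]])

x = np.array([1.0, 2.0])   # example vectors
z = np.array([3.0, -1.0])
assert np.isclose(np.dot(x, z) ** 2, np.dot(phi(x), phi(z)))
```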
Characterization of kernels
A function
$\kappa: X \times X \rightarrow \mathbb{R}$
which is either continuous or has a countable domain can be decomposed as
$\kappa(\mathbf{x}, \mathbf{z}) = \langle \phi(\mathbf{x}), \phi(\mathbf{z}) \rangle$
into a feature map φ into a Hilbert space F applied to both its arguments, followed by the evaluation of the inner product in F, if and only if it satisfies the finitely positive semi-definite property.
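An empirical illustration of the positive semi-definite property: every kernel matrix built from a valid kernel must have non-negative eigenvalues. The Gaussian kernel is used here as a known-valid example:

```python
import numpy as np

def gaussian_kernel(x, z, sigma=1.0):
    # A standard example of a valid (finitely PSD) kernel.
    return np.exp(-np.sum((x - z) ** 2) / (2 * sigma ** 2))

X = np.random.default_rng(1).normal(size=(20, 4))
K = np.array([[gaussian_kernel(a, b) for b in X] for a in X])
eigvals = np.linalg.eigvalsh(K)
assert eigvals.min() > -1e-10   # PSD up to numerical error
```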
Kernel PCA
Process:
Kernel matrix: $K_{ij} = \kappa(\mathbf{x}_i, \mathbf{x}_j),\; i, j = 1, \ldots, l$
Centering: $K \leftarrow K - \frac{1}{l}\,\mathbf{j}\mathbf{j}'K - \frac{1}{l}\,K\mathbf{j}\mathbf{j}' + \frac{1}{l^2}\,(\mathbf{j}'K\mathbf{j})\,\mathbf{j}\mathbf{j}'$
Eigen-analysis: $[V, \Lambda] = \mathrm{eig}(K)$
Normalization: $\boldsymbol{\alpha}^j = \frac{1}{\sqrt{\lambda_j}}\,\mathbf{v}_j,\; j = 1, \ldots, k$
Projection: $\tilde{\mathbf{x}}_i = \left( \sum_{m=1}^{l} \alpha_m^j\, \kappa(\mathbf{x}_m, \mathbf{x}_i) \right)_{j=1}^{k}$
Output: transformed data $\hat{S} = \{ \tilde{\mathbf{x}}_1, \tilde{\mathbf{x}}_2, \ldots, \tilde{\mathbf{x}}_l \}$
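The process in code, with a linear kernel passed in so the example stays self-contained:

```python
import numpy as np

def kernel_pca(X, kernel, k=2):
    """Kernel PCA following the process above; returns the transformed data."""
    l = X.shape[0]
    K = np.array([[kernel(a, b) for b in X] for a in X])
    # Center: K - (1/l)jj'K - (1/l)Kjj' + (1/l^2)(j'Kj)jj'
    J = np.ones((l, l)) / l
    K = K - J @ K - K @ J + J @ K @ J
    # Eigen-analysis; eigh returns eigenvalues in ascending order.
    lam, V = np.linalg.eigh(K)
    lam, V = lam[::-1][:k], V[:, ::-1][:, :k]
    alpha = V / np.sqrt(np.maximum(lam, 1e-12))   # normalization step
    return K @ alpha                              # projections of the training data

X = np.random.default_rng(2).normal(size=(30, 5))
Xt = kernel_pca(X, kernel=np.dot, k=2)            # transformed data, shape (30, 2)
```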
Kernel matrix
Process:
$K_{ij} = \kappa(\mathbf{x}_i, \mathbf{x}_j),\; i, j = 1, \ldots, l$
Centering: $K \leftarrow K - \frac{1}{l}\,\mathbf{j}\mathbf{j}'K - \frac{1}{l}\,K\mathbf{j}\mathbf{j}' + \frac{1}{l^2}\,(\mathbf{j}'K\mathbf{j})\,\mathbf{j}\mathbf{j}'$
$[V, \Lambda] = \mathrm{eig}(K)$
$\boldsymbol{\alpha}^s = \sum_{j=1}^{k} (\mathbf{v}_j' \mathbf{y}^s)\, \mathbf{v}_j,\; s = 1, \ldots, m$
$f_s(\mathbf{x}) = \sum_{i=1}^{l} \alpha_i^s\, \kappa(\mathbf{x}_i, \mathbf{x}),\; s = 1, \ldots, m$
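A sketch of this projection step, assuming the kernel matrix K is already computed and Y stacks the target vectors y^s as columns; it implements exactly the two formulas above:

```python
import numpy as np

def project_targets(K, Y, k=3):
    """Dual coefficients alpha^s = sum_j (v_j' y^s) v_j over the top-k
    eigenvectors of the centered kernel matrix."""
    l = K.shape[0]
    J = np.ones((l, l)) / l
    Kc = K - J @ K - K @ J + J @ K @ J       # same centering as above
    lam, V = np.linalg.eigh(Kc)
    V = V[:, ::-1][:, :k]                    # top-k eigenvectors v_j
    alpha = V @ (V.T @ Y)                    # one dual vector per target s
    return alpha                             # f_s(x) = sum_i alpha_i^s k(x_i, x)
```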