Experiment 1
Aim: To study various spatial descriptions and transformations using the Peter Corke toolbox in Matlab.
>> T1 = se2(1, 2, 30*pi/180)
% represents a translation of (1, 2) and a rotation of 30 degrees w.r.t. the world frame
T1 =
    0.8660   -0.5000    1.0000
    0.5000    0.8660    2.0000
         0         0    1.0000
>> trplot2(T1, 'frame', '1', 'color', 'b', 'axis', [0 5 0 5]);
% this option specifies that the label for the frame is {1} and that it is colored blue
>> hold on;
>> T2 = se2(2, 1, 0)
T2 =
     1     0     2
     0     1     1
     0     0     1
>> trplot2(T2, 'frame', '2', 'color', 'r', 'axis', [0 5 0 5]);
>> hold on;
% now a displacement of (2, 1) has been applied with respect to frame {1}
>> T3 = T1*T2
T3 =
    0.8660   -0.5000    2.2320
    0.5000    0.8660    3.8660
         0         0    1.0000
>> trplot2(T3, 'frame', '3', 'color', 'g', 'axis', [0 5 0 5]);
>> hold on;
% the non-commutativity of composition is shown below
>> T4 = T2*T1
T4 =
    0.8660   -0.5000    3.0000
    0.5000    0.8660    3.0000
         0         0    1.0000
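As a quick numerical check of this non-commutativity, the two composition orders can be compared directly. A minimal sketch, assuming the classic Robotics Toolbox in which se2 returns a 3x3 homogeneous matrix:

T1 = se2(1, 2, 30*pi/180);
T2 = se2(2, 1, 0);
norm(T1*T2 - T2*T1)   % non-zero, so the order of composition matters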
(a)(i) Using the ZYZ (alpha, beta, gamma) Euler angle convention, write a Matlab program to compute the rotation matrix ARB when the user enters the Euler angles alpha, beta and gamma, e.g. alpha = 10, beta = 20, gamma = 30 degrees. Also demonstrate the inverse property of the rotation matrix: BRA = (ARB)^-1 = (ARB)^T.
Code:
>> dr = pi/180;
ang = input('enter alpha, beta and gamma in degrees');
alpha = ang(1)*dr;   % converting angles in degrees to radians
beta = ang(2)*dr;
gamma = ang(3)*dr;
Tz = rotz(alpha);    % rotate about the z-axis by angle alpha
Ty = roty(beta);     % rotate about the current y-axis by angle beta
Tx = rotz(gamma);    % rotate about the current z-axis by angle gamma
Rmult = Tz*Ty*Tx;    % resultant rotation matrix
Trpy = eul2tr(alpha, beta, gamma);   % Euler angles to a homogeneous transform
enter alpha, beta and gamma in degrees[10 20 30]
>> Tz
Tz =
    0.9848   -0.1736         0
    0.1736    0.9848         0
         0         0    1.0000
>> Tx
Tx =
    0.8660   -0.5000         0
    0.5000    0.8660         0
         0         0    1.0000
>> Ty
Ty =
    0.9397         0    0.3420
         0    1.0000         0
   -0.3420         0    0.9397
>> Rmult
Rmult =
    0.7146   -0.6131    0.3368
    0.6337    0.7713    0.0594
   -0.2962    0.1710    0.9397
>> Trpy
Trpy =
    0.7146   -0.6131    0.3368         0
    0.6337    0.7713    0.0594         0
   -0.2962    0.1710    0.9397         0
         0         0         0    1.0000
>> R1 = inv(Rmult)   % inverse rotation matrix BRA
R1 =
    0.7146    0.6337   -0.2962
   -0.6131    0.7713    0.1710
    0.3368    0.0594    0.9397
>> R2 = Rmult.'      % the transpose equals the inverse, confirming BRA = (ARB)^-1 = (ARB)^T
R2 =
    0.7146    0.6337   -0.2962
   -0.6131    0.7713    0.1710
    0.3368    0.0594    0.9397
>> ang = tr2eul(Trpy)   % recover the Euler angles (in radians)
ang =
    0.1745    0.3491    0.5236
>> ang/dr
ans =
   10.0000   20.0000   30.0000
(ii) For beta = 20 degrees (alpha = 0, gamma = 0) and BP = [1 0 1], calculate AP.
Pa =
    1.2817
         0
    0.5977
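The Pa value above can be reproduced by applying the rotation directly to the point; a small sketch assuming the toolbox function roty, which returns a 3x3 rotation matrix for an angle in radians:

Pb = [1; 0; 1];            % the point expressed in frame {B}
Pa = roty(20*pi/180)*Pb    % gives [1.2817; 0; 0.5977], as above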
(b)(i) Write a Matlab program to calculate the homogeneous transformation matrix ATB when the user enters the ZYX Euler angles alpha, beta, gamma and the position vector APB. Test with the case alpha = 10, beta = 20, gamma = 30 and APB = [1 2 3].
>> DR = pi/180;
angs = input('Enter alpha beta gamma');
alpha = angs(1)*DR;   % converting angles in degrees to radians
beta = angs(2)*DR;
gamma = angs(3)*DR;
Pba = input('enter Pba');   % enter APB
Enter alpha beta gamma[10 20 30]
enter Pba[1 2 3]'
>> Trpy = rpy2tr(gamma, beta, alpha);   % roll, pitch and yaw angles to a homogeneous transform
Ttrn = transl(Pba);   % translation matrix of APB
Tba = Ttrn*Trpy;      % ATB corresponding to the rotated frame Trpy
>> Pba
Pba =
     1
     2
     3
>> Trpy
Trpy =
    0.9254   -0.1632    0.3420         0
    0.3188    0.8232   -0.4698         0
   -0.2049    0.5438    0.8138         0
         0         0         0    1.0000
>> Tba
Tba =
    0.9254   -0.1632    0.3420    1.0000
    0.3188    0.8232   -0.4698    2.0000
   -0.2049    0.5438    0.8138    3.0000
         0         0         0    1.0000
(ii) For beta = 20 (alpha = 0, gamma = 0), APB = [3 0 1] and BP = [1 0 1], use Matlab to calculate AP.
Pb = input('enter Pb column wise');
Pa = Tba*[Pb;1];   % calculating AP using ATB and BP
Ttrn1 = transl(3,0,1)
Tba1 = rpy2tr(0, 20*DR, 0);
Tba2 = Ttrn1*Tba1
enter Pb column wise[1 0 1]'
Ttrn1 =
     1     0     0     3
     0     1     0     0
     0     0     1     1
     0     0     0     1
Tba2 =
    0.9397         0    0.3420    3.0000
         0    1.0000         0         0
   -0.3420         0    0.9397    1.0000
         0         0         0    1.0000
(iii) Also show that ATB*(ATB)^-1 = I.
>> Tba_inv = inv(Tba);
>> eye(4) - Tba*Tba_inv
ans =
   1.0e-15 *
         0         0    0.0555    0.1110
    0.0278         0         0         0
    0.0555    0.1110         0         0
         0         0         0         0
>> eye(4) - Tba_inv*Tba
ans =
   1.0e-15 *
         0    0.1110   -0.0555    0.0833
         0    0.0139         0    0.0555
         0    0.0833    0.0555         0
         0         0         0         0
Both residuals are at machine precision, so the products equal the identity to within round-off.
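The same identity check can be written compactly with a matrix norm; a minimal sketch, assuming Tba is the homogeneous transform computed above:

err1 = norm(Tba*inv(Tba) - eye(4));   % forward product
err2 = norm(inv(Tba)*Tba - eye(4));   % reverse product
fprintf('%g  %g\n', err1, err2)       % both of order 1e-15, as in the output above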
(c) Write a Matlab program to calculate ATC and CTA, given that ATB and BTC are obtained above: define ATB to be the result from part (a) and BTC the result from part (b).
Tac =
    0.7343    0.6537   -0.1829   -2.4413
   -0.0993    0.3701    0.9237   -6.5265
    0.6715   -0.6601    0.3367   -2.2051
         0         0         0    1.0000
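The computation itself is one composition followed by one inverse; a minimal sketch, assuming Tba holds ATB from part (a) and Tcb holds BTC from part (b):

Tac = Tba*Tcb;    % ATC = ATB * BTC
Tca = inv(Tac);   % CTA = (ATC)^-1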
>> Tac2
Tac2 =
    0.7343    0.6537   -0.1829   -2.4413
   -0.0993    0.3701    0.9237   -6.5265
    0.6715   -0.6601    0.3367   -2.2051
         0         0         0    1.0000
>> Tcb2
Tcb2 =
    0.9254   -0.1632    0.3420    1.0000
    0.3188    0.8232   -0.4698    2.0000
   -0.2049    0.5438    0.8138    3.0000
         0         0         0    1.0000
>> Tba2
Tba2 =
    0.9254   -0.1632    0.3420    1.0000
    0.3188    0.8232   -0.4698    2.0000
   -0.2049    0.5438    0.8138    3.0000
         0         0         0    1.0000
Experiment 2
Aim: Write a Matlab program for the planar 3-degree-of-freedom RRR robot with parameters L1 = 4 m, L2 = 3 m, L3 = 2 m, and derive the forward kinematics 0T3 for the following input cases:
Theta = [0 0 0]
Theta = [10 20 30]
Theta = [90 90 90]
>> L.A(0.5)
ans =
    0.8776   -0.0000    0.4794    0.1755
    0.4794    0.0000   -0.8776    0.0959
         0    1.0000    0.0000    0.1000
         0         0         0    1.0000
>> L.RP
ans =
R
>> L.a
ans =
    0.2000
>> L.offset = 0.5;
>> L.A(0)   % with the offset, the pose equals L.A(0.5) above
ans =
    0.8776   -0.0000    0.4794    0.1755
    0.4794    0.0000   -0.8776    0.0959
         0    1.0000    0.0000    0.1000
         0         0         0    1.0000
>> mdl_twolink
twolink =
two link (2 axis, RR, stdDH, slowRNE)
from Spong, Hutchinson, Vidyasagar;
+---+-----------+-----------+-----------+-----------+
| j |     theta |         d |         a |     alpha |
+---+-----------+-----------+-----------+-----------+
|  1|         q1|          0|          1|          0|
|  2|         q2|          0|          1|          0|
+---+-----------+-----------+-----------+-----------+
grav =     0    base = 1 0 0 0    tool = 1 0 0 0
           0           0 1 0 0           0 1 0 0
        9.81           0 0 1 0           0 0 1 0
                       0 0 0 1           0 0 0 1
>> twolink.n
ans =
     2
>> links = twolink.links   % get link information
links =
 theta=q1, d=0, a=1, alpha=0 (R,stdDH)
 theta=q2, d=0, a=1, alpha=0 (R,stdDH)
>> twolink.fkine([0 0])   % fkine gives the pose (4x4) of the robot end-effector as a homogeneous transformation
ans =
     1     0     0     2
     0     1     0     0
     0     0     1     0
     0     0     0     1
>> twolink.fkine([pi/4 -pi/4])
ans =
    1.0000         0         0    1.7071
         0    1.0000         0    0.7071
         0         0    1.0000         0
         0         0         0    1.0000
>> twolink.plot([0 0])   % plot displays the robot in 3D, here for theta = [0 0]
>> mdl_puma560
>> p560
p560 =
Puma 560 (6 axis, RRRRRR, stdDH, slowRNE)
viscous friction; params of 8/95;
+---+-----------+-----------+-----------+-----------+
| j |     theta |         d |         a |     alpha |
+---+-----------+-----------+-----------+-----------+
|  1|         q1|          0|          0|      1.571|
|  2|         q2|          0|     0.4318|          0|
|  3|         q3|       0.15|     0.0203|     -1.571|
|  4|         q4|     0.4318|          0|      1.571|
|  5|         q5|          0|          0|     -1.571|
|  6|         q6|          0|          0|          0|
+---+-----------+-----------+-----------+-----------+
grav =     0    base = 1 0 0 0    tool = 1 0 0 0
           0           0 1 0 0           0 1 0 0
        9.81           0 0 1 0           0 0 1 0
                       0 0 0 1           0 0 0 1
>> p560.plot(qz)
>> p560.plot(qr)
>> p560.plot(qs)
>> p560.plot(qn)
Code:
>> B(1) = Link([0 0 4 0])
B =
 theta=q, d=0, a=4, alpha=0 (R,stdDH)
>> B(2) = Link([0 0 3 0])
B =
 theta=q1, d=0, a=4, alpha=0 (R,stdDH)
 theta=q2, d=0, a=3, alpha=0 (R,stdDH)
>> B(3) = Link([0 0 2 0])
B =
 theta=q1, d=0, a=4, alpha=0 (R,stdDH)
 theta=q2, d=0, a=3, alpha=0 (R,stdDH)
 theta=q3, d=0, a=2, alpha=0 (R,stdDH)
>> SKM = SerialLink(B, 'name', 'myBot')
SKM =
myBot (3 axis, RRR, stdDH, slowRNE)
+---+-----------+-----------+-----------+-----------+
| j |     theta |         d |         a |     alpha |
+---+-----------+-----------+-----------+-----------+
|  1|         q1|          0|          4|          0|
|  2|         q2|          0|          3|          0|
|  3|         q3|          0|          2|          0|
+---+-----------+-----------+-----------+-----------+
grav =     0    base = 1 0 0 0    tool = 1 0 0 0
           0           0 1 0 0           0 1 0 0
        9.81           0 0 1 0           0 0 1 0
                       0 0 0 1           0 0 0 1
>> dr = pi/180;
>> q1 = [0 0 0]*dr
q1 =
     0     0     0
>> T30_1 = SKM.fkine(q1)
T30_1 =
     1     0     0     9
     0     1     0     0
     0     0     1     0
     0     0     0     1
>> q2 = [10 20 30]*dr;
>> T30_2 = SKM.fkine(q2)
T30_2 =
    0.5000   -0.8660         0    7.5373
    0.8660    0.5000         0    3.9266
         0         0    1.0000         0
         0         0         0    1.0000
>> q3 = [90 90 90]*dr;
>> T30_3 = SKM.fkine(q3)
T30_3 =
    0.0000    1.0000         0   -3.0000
   -1.0000    0.0000         0    2.0000
         0         0    1.0000         0
         0         0         0    1.0000
>> SKM.plot([0 0 0], 'workspace', [-5 5 -5 5 -5 5])
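The fkine results can be cross-checked against the closed-form planar forward kinematics; a minimal sketch for the Theta = [10 20 30] degree case:

L1 = 4; L2 = 3; L3 = 2;
q = [10 20 30]*pi/180;
phi = q(1) + q(2) + q(3);                            % end-effector orientation
x = L1*cos(q(1)) + L2*cos(q(1)+q(2)) + L3*cos(phi);
y = L1*sin(q(1)) + L2*sin(q(1)+q(2)) + L3*sin(phi);
fprintf('%.4f  %.4f  %.4f\n', x, y, phi)             % 7.5373  3.9266  1.0472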
Experiment 3
Aim: To study the inverse kinematics of a planar 3-degree-of-freedom RRR robot using the Peter Corke toolbox.
Theory:
We have shown how to determine the pose of the end-effector given the joint coordinates and optional tool and base transforms. A problem of real practical interest is the inverse problem: given the desired pose of the end-effector, what are the required joint coordinates? For example, if we know the Cartesian pose of an object, what joint coordinates does the robot need in order to reach it? This is the inverse kinematics problem, written in functional form as q = K^-1(T). In general this function is not unique, and for some classes of manipulator no closed-form solution exists, necessitating a numerical solution.
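For a two-link planar arm a closed-form solution does exist; a minimal sketch of the standard law-of-cosines derivation, with link lengths a1, a2 and a desired wrist position (x, y) chosen here purely for illustration:

a1 = 1; a2 = 1; x = 1.2; y = 0.5;               % illustrative values
c2 = (x^2 + y^2 - a1^2 - a2^2)/(2*a1*a2);
q2 = atan2(sqrt(1 - c2^2), c2);                 % elbow-down; use -sqrt(...) for elbow-up
q1 = atan2(y, x) - atan2(a2*sin(q2), a1 + a2*cos(q2));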
>> mdl_puma560
>> qn
qn =
         0    0.7854    3.1416         0    0.7854         0
>> T = p560.fkine(qn)   % fkine gives the pose (4x4) of the robot end-effector as a homogeneous transformation
T =
   -0.0000    0.0000    1.0000    0.5963
   -0.0000    1.0000   -0.0000   -0.1501
   -1.0000   -0.0000   -0.0000   -0.0144
         0         0         0    1.0000
>> qi = p560.ikine6s(T)   % ikine6s gives the joint coordinates corresponding to the end-effector pose T
qi =
   -0.0000    0.7854    3.1416    3.1416   -0.7854   -3.1416
>> qz
qz =
     0     0     0     0     0     0
>> T = p560.fkine(qz)
T =
    1.0000         0         0    0.4521
         0    1.0000         0   -0.1500
         0         0    1.0000    0.4318
         0         0         0    1.0000
>> qi = p560.ikine6s(T)
qi =
         0    1.5249   -3.0476   -3.1416   -1.5228   -3.1416
>> qz = [pi/4, 0, pi/2, pi/4, 0, 0]
qz =
    0.7854         0    1.5708    0.7854         0         0
>> T = p560.fkine(qz)
T =
   -0.5000   -0.5000   -0.7071    0.1061
    0.5000    0.5000   -0.7071   -0.1061
    0.7071   -0.7071    0.0000    0.0203
         0         0         0    1.0000
>> qi = p560.ikine6s(T)
Warning: point not reachable
> In SerialLink.ikine6s at 201
qi =
   NaN   NaN   NaN   NaN   NaN   NaN
>> qi = p560.ikine6s(T, 'ru')   % gives the original set of joint angles by specifying a right-handed configuration with the elbow up
qi =
    2.6486   -2.3081    3.1416   -2.4673   -0.8604   -0.4805
>> p560.plot(qi)
>> p560.ikine6s( transl(1, 0, 0) )   % due to mechanical limits on joint angles and possible collisions between links, not all eight solutions are physically achievable; here no solution can be achieved
ans =
    0.9992    0.7873   -1.8455
Code:
Write a Matlab program to solve the planar RRR robot inverse kinematics problem with length parameters L1 = 4 m, L2 = 3 m, L3 = 2 m, for the following relative poses:
(a) 0TH = [1 0 0 9; 0 1 0 0; 0 0 1 0; 0 0 0 1]
(b) 0TH = [0.5 -0.866 0 7.5373; 0.866 0.6 0 3.9266; 0 0 1 0; 0 0 0 1]
(c) 0TH = [0 1 0 -3; -1 0 0 2; 0 0 1 0; 0 0 0 1]
(d) 0TH = [0.866 0.5 0 -3.1245; -0.5 0.866 0 9.1674; 0 0 1 0; 0 0 0 1]
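Pose (a) corresponds to the arm fully stretched along x, since L1 + L2 + L3 = 9, so the expected answer q = [0 0 0] is known in advance; a quick forward check, assuming the SKM model built in Experiment 2:

SKM.fkine([0 0 0])   % returns [1 0 0 9; 0 1 0 0; 0 0 1 0; 0 0 0 1]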
(a) TH0 = [1 0 0 9; 0 1 0 0; 0 0 1 0; 0 0 0 1]
TH0 =
     1     0     0     9
     0     1     0     0
     0     0     1     0
     0     0     0     1
>> dr = pi/180;
>> q1 = anadi.ikine(TH0, [0 0 0], [1 1 0 0 0 1])
q1 =
     0     0     0
>> q1 = anadi.ikine(TH0, [90 -90 90]*dr, [1 1 0 0 0 1])
q1 =
   1.0e-03 *
    0.1934   -0.4512    0.2578
>> anadi.plot(q1, 'workspace', [-8 8 -8 8 -8 8])
>> anadi.fkine(q1)
ans =
    1.0000   -0.0000         0    9.0000
    0.0000    1.0000         0    0.0000
         0         0    1.0000         0
         0         0         0    1.0000
(b) TH1 = [0.5 -0.866 0 7.5373; 0.866 0.6 0 3.9266; 0 0 1 0; 0 0 0 1]
TH1 =
    0.5000   -0.8660         0    7.5373
    0.8660    0.6000         0    3.9266
         0         0    1.0000         0
         0         0         0    1.0000
>> q1 = anadi.ikine(TH1, [0 0 0], [1 1 0 0 0 1])
Error: matrix is not an orthonormal rotation matrix:
    1.0000   -0.0866         0
   -0.0866    1.1100         0
         0         0    1.0000
The error occurs because the 0.6 entry makes the rotation part non-orthonormal; with the correct entry 0.5 this pose is the Theta = [10 20 30] degree result from Experiment 2.
(c) TH2 = [0 1 0 -3; -1 0 0 2; 0 0 1 0; 0 0 0 1]
TH2 =
     0     1     0    -3
    -1     0     0     2
     0     0     1     0
     0     0     0     1
>> q2 = anadi.ikine(TH2, [0 0 0], [1 1 0 0 0 1])
q2 =
    1.5708    1.5708    1.5708
>> anadi.plot(q2)
(d) TH3 =
    0.8660    0.5000         0   -3.1245
   -0.5000    0.8660         0    9.1674
         0         0    1.0000         0
         0         0         0    1.0000
>> q3 = anadi.ikine(TH3, [0 0 0]*dr, [1 1 0 0 0 1])
q3 =
    1.6852    1.6649    2.4095
>> anadi.fkine(q3)
ans =
    0.8660    0.5000         0   -1.6596
   -0.5000    0.8660         0    2.3529
         0         0    1.0000         0
         0         0         0    1.0000
With a different initial estimate the numerical solution diverges:
q3 =
   1.0e+04 *
   -2.6746    6.4932   -3.7081
>> anadi.fkine(q3)
ans =
    0.8769    0.4808         0   -1.6122
   -0.4808    0.8769         0    2.3772
         0         0    1.0000         0
         0         0         0    1.0000
>> q3 = anadi.ikine(TH3, [0 0 0], [2 0 0 0 0 1])   % for a different mask
q3 =
   -2.0655   -0.9075    2.4494
>> anadi.fkine(q3)
ans =
    0.8660    0.5000         0   -3.1245
   -0.5000    0.8660         0   -5.0238
         0         0    1.0000         0
         0         0         0    1.0000
>> anadi.plot(q3)
The different results for the same pose 0TH are due to the different initial estimates (Q0) and masks (M) used.
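The mask passed to ikine selects which Cartesian degrees of freedom the solver should satisfy: a 3-joint planar robot can only control x, y and rotation about z, hence [1 1 0 0 0 1]. A small sketch of the idea, assuming the anadi model used above:

m = [1 1 0 0 0 1];                  % solve for x, y and yaw only
q = anadi.ikine(TH2, [0 0 0], m);   % TH2 from part (c)
anadi.fkine(q)                      % matches TH2 in the masked degrees of freedom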
Experiment 4
Aim: To perform the forward kinematics of a 3-DOF planar robot, the Stanford manipulator and the Puma robot for the given DH parameters using RoboAnalyzer, to calculate the end-effector position for a given set of joint values using Matlab, and to verify the result against the RoboAnalyzer output.
Theory:
Forward kinematics: to determine the position and orientation of the end-effector given the values of the joint variables of the robot.
Using RoboAnalyzer: RoboAnalyzer is a 3D model-based robotics learning system developed using OpenTK and Visual C#. The blocks of its interface, numbered in the figures above, are described below.
1. In this block, one can select the DOF and type of robot from the tabs provided on the far left. The centre table shows the DH parameters for the robot. These can be changed as per the requirement.
2. Here 4 tabs are present. One can see a DH parameter animation for the robot in the Visualize DH tab. The Link Config and EE Config tabs give the transformation matrix for a particular link and from base to end-effector respectively.
3. Once the robot is configured, press the FKin button to complete the forward kinematic analysis. Then press the play button to view the trajectory of the robot with the initial and final positions as specified in block 1. One can also change the time duration and the number of steps taken by the robot in moving from the initial to the final position.
4. Here one can see the actual robot, the coordinate axes w.r.t. each link, and the trajectory plotted after hitting the play button.
5. This screen is obtained by hitting the Graph tab at the top of the screen. Here one can see the variation in position of each link separately with time, as well as joint parameters such as joint value, velocity, acceleration etc.
6. This block shows the graph for the parameter selected in block 5.
>> Link
ans =
 theta=q, d=0, a=0, alpha=0 (R,stdDH)
>> L(1) = Link([0 0 0.3 0]);
>> L(2) = Link([0 0 0.25 pi/2]);
>> L(3) = Link([0 0 0.1 0]);
L =
 theta=q1, d=0, a=0.3, alpha=0 (R,stdDH)
 theta=q2, d=0, a=0.25, alpha=1.571 (R,stdDH)
 theta=q3, d=0, a=0.1, alpha=0 (R,stdDH)
>> three_link = SerialLink(L, 'name', 'three link')
three_link =
three link (3 axis, RRR, stdDH, slowRNE)
+---+-----------+-----------+-----------+-----------+
| j |     theta |         d |         a |     alpha |
+---+-----------+-----------+-----------+-----------+
|  1|         q1|          0|        0.3|          0|
|  2|         q2|          0|       0.25|      1.571|
|  3|         q3|          0|        0.1|          0|
+---+-----------+-----------+-----------+-----------+
grav =     0    base = 1 0 0 0    tool = 1 0 0 0
           0           0 1 0 0           0 1 0 0
        9.81           0 0 1 0           0 0 1 0
                       0 0 0 1           0 0 0 1
>> DR = pi/180;
>> q1 = [0 90 0]*DR
q1 =
         0    1.5708         0
>> T30_1 = fkine(three_link, q1)
T30_1 =
   -0.0000    0.0000    1.0000    0.3000
    1.0000   -0.0000    0.0000    0.3500
    0.0000    1.0000    0.0000         0
         0         0         0    1.0000
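The same pose can be built up link by link from the individual DH transforms; a short sketch using the Link objects defined above (in the classic toolbox, Link.A returns the 4x4 link transform):

q = [0 pi/2 0];
T = L(1).A(q(1)) * L(2).A(q(2)) * L(3).A(q(3))   % equals fkine(three_link, q)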
Stanford manipulator
>> Link
ans =
 theta=q, d=0, a=0, alpha=0 (R,stdDH)
>> L(1) = Link([0 0.762 0 -pi/2])
>> L(2) = Link([0 0.393412 0 -pi/2]);
>> L(3) = Link([-pi/2 0 0 0 1]);
>> L(4) = Link([0 0.2268 0 -pi/2]);
>> L(5) = Link([0 0 0 -pi/2]);
>> L(6) = Link([0 0.4318 0 0])
L =
 theta=q1, d=0.762, a=0, alpha=-1.571 (R,stdDH)
 theta=q2, d=0.3934, a=0, alpha=-1.571 (R,stdDH)
 theta=-1.571, d=q3, a=0, alpha=0 (P,stdDH)
 theta=q4, d=0.2268, a=0, alpha=-1.571 (R,stdDH)
 theta=q5, d=0, a=0, alpha=-1.571 (R,stdDH)
 theta=q6, d=0.4318, a=0, alpha=0 (R,stdDH)
>> Stan_link = SerialLink(L, 'name', 'Stanford');
grav =     0    base = 1 0 0 0    tool = 1 0 0 0
           0           0 1 0 0           0 1 0 0
        9.81           0 0 1 0           0 0 1 0
                       0 0 0 1           0 0 0 1
>> q1 = [0 -pi/2 0.635 0 pi/2 pi/2]
q1 =
         0   -1.5708    0.6350         0    1.5708    1.5708
>> T60_1 = fkine(Stan_link, q1)
T60_1 =
   -0.0000    0.0000    1.0000    0.8618
    1.0000    0.0000    0.0000   -0.0384
   -0.0000    1.0000   -0.0000    0.7620
         0         0         0    1.0000
Puma robot
+---+-----------+-----------+-----------+-----------+
| j |     theta |         d |         a |     alpha |
+---+-----------+-----------+-----------+-----------+
|  1|         q1|     0.7742|          0|     -1.571|
|  2|         q2|     0.1016|     0.5066|      3.142|
|  3|         q3|    -0.0381|       0.02|      1.571|
|  4|         q4|      0.268|          0|      1.571|
|  5|         q5|          0|          0|     -1.571|
|  6|         q6|    0.05842|          0|          0|
+---+-----------+-----------+-----------+-----------+
grav =     0    base = 1 0 0 0    tool = 1 0 0 0
           0           0 1 0 0           0 1 0 0
        9.81           0 0 1 0           0 0 1 0
                       0 0 0 1           0 0 0 1
>> q1 = [0 0 0 0 0 0]
q1 =
     0     0     0     0     0     0
>> T60_1 = fkine(Puma_link, q1)
T60_1 =
    1.0000         0         0    0.5266
         0    1.0000         0    0.1397
         0         0    1.0000    1.1006
         0         0         0    1.0000
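Each row of the DH tables above corresponds to one standard Denavit-Hartenberg link transform; a minimal sketch of that matrix, handy for verifying any of the fkine results by hand (theta, d, a, alpha as in the tables):

function T = dh_std(theta, d, a, alpha)
% Standard DH link transform: Rz(theta) Tz(d) Tx(a) Rx(alpha)
T = [cos(theta) -sin(theta)*cos(alpha)  sin(theta)*sin(alpha)  a*cos(theta);
     sin(theta)  cos(theta)*cos(alpha) -cos(theta)*sin(alpha)  a*sin(theta);
              0             sin(alpha)             cos(alpha)             d;
              0                      0                      0             1];
end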
Experiment 5
Aim of the experiment: To derive the analytical expression for the inverse kinematics of a 2-DOF planar robot and verify the solution with a Matlab program.
SOLUTIONS OF IKIN
To select a robot and view the solutions of its inverse kinematics, follow the numbered steps illustrated in the RoboAnalyzer screenshots (not reproduced here).
Code:
The solution may be obtained from Matlab using trigonometric analysis or directly from RoboAnalyzer.
(a) 3R planar robot
a1 = 2; a2 = 2; a3 = 1;                                % non-zero constant DH parameters
phi = pi/3; px = 2.5 + sqrt(3); py = 1 + sqrt(3)/2;    % input
wx = px - a3*cos(phi);                                 % wrist-centre coordinates
wy = py - a3*sin(phi);
del = wx*wx + wy*wy;
c2 = (del - a1*a1 - a2*a2)/(2*a1*a2);                  % calculation for theta2
s2 = sqrt(1 - c2*c2);
th21 = atan2(s2, c2);                                  % first solution branch
th22 = atan2(-s2, c2);                                 % second solution branch
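With theta2 known, theta1 follows from the wrist-centre geometry and theta3 from the orientation constraint phi = theta1 + theta2 + theta3; a sketch continuing the code above for the first branch:

th11 = atan2(wy, wx) - atan2(a2*s2, a1 + a2*c2);   % theta1, first branch
th31 = phi - th11 - th21;                          % theta3 from the orientation constraint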
RoboAnalyzer Output: screenshot showing the two IKin solution branches (c11 and c12).
(b) 3R Articulated
>> a2 = 1; a3 = 1;
>> px = 1; py = 0; pz = 0;
>> delxy = px*px + py*py; del = delxy + pz*pz;
>> th11 = atan2(py, px);
>> th12 = pi + atan2(py, px);
>> c3 = (del - a2*a2 - a3*a3)/(2*a2*a3);
>> s3 = sqrt(1 - c3*c3);
>> th21d = th21*r2d;
>> th22d = th22*r2d;
>> th23d = th23*r2d;
>> th24d = th24*r2d;
>> th31d = th31*r2d;
>> th32d = th32*r2d;
>> th11d = th11*r2d
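The conversions above assume joint angles th21...th32 computed earlier in the session; a sketch of how one branch would be obtained, continuing from c3 and s3 (r2d = 180/pi is the radians-to-degrees factor the session uses):

r2d = 180/pi;
th31 = atan2(s3, c3);                                      % elbow angle, one branch
th21 = atan2(pz, sqrt(delxy)) - atan2(a3*s3, a2 + a3*c3);  % shoulder angle, same branch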
RoboAnalyzer Output: screenshot of the corresponding IKin solutions.
Experiment 6
Aim: To study velocity kinematics using the Peter Corke toolbox.
Theory:
Velocity kinematics relates the linear and angular velocities of the end-effector to the joint velocities.
>> mdl_puma560
>> T0 = p560.fkine(qn)
T0 =
   -0.0000    0.0000    1.0000    0.5963
   -0.0000    1.0000   -0.0000   -0.1501
   -1.0000   -0.0000   -0.0000   -0.0144
         0         0         0    1.0000
>> dq = 1e-6;
>> Tp = p560.fkine(qn + [dq 0 0 0 0 0])
Tp =
   -0.0000    0.0000    1.0000    0.5963
   -0.0000    1.0000   -0.0000   -0.1500
   -1.0000   -0.0000   -0.0000   -0.0144
         0         0         0    1.0000
>> dTdq1 = (Tp - T0) / dq
dTdq1 =
         0   -1.0000   -0.0000    0.1500
    0.0000   -0.0000    1.0000    0.5963
         0         0         0         0
         0         0         0         0
>> dRdq1 = dTdq1(1:3,1:3)
dRdq1 =
         0   -1.0000   -0.0000
    0.0000   -0.0000    1.0000
         0         0         0
>> R = T0(1:3, 1:3)
R =
   -0.0000    0.0000    1.0000
   -0.0000    1.0000   -0.0000
   -1.0000   -0.0000   -0.0000
>> S = dRdq1*R'
S =
   -0.0000   -1.0000    0.0000
    1.0000    0.0000    0.0000
         0         0         0
>> vex(S)
ans =
    0.0000
    0.0000
    1.0000
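vex is the inverse of forming a skew-symmetric matrix; a small round-trip sketch using the toolbox functions skew and vex:

w = [1 2 3]';
S = skew(w);   % 3x3 skew-symmetric matrix such that S*v = cross(w, v)
vex(S)         % recovers [1; 2; 3]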
>> J = p560.jacob0(qn)
J =
    0.1501    0.0144    0.3197         0         0         0
    0.5963    0.0000    0.0000         0         0         0
         0    0.5963    0.2910         0         0         0
    0.0000   -0.0000   -0.0000    0.7071   -0.0000   -0.0000
    0.0000   -1.0000   -1.0000   -0.0000   -1.0000   -0.0000
    1.0000    0.0000    0.0000   -0.7071    0.0000   -1.0000
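The Jacobian maps joint rates to the spatial velocity of the end-effector in the world frame, v = J(q)*qd; a quick sketch using the J just computed:

qd = [1 0 0 0 0 0]';   % unit velocity on joint 1 only
v = J*qd               % approximately [0.1501 0.5963 0 0 0 1]': vx, vy, vz, wx, wy, wz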
>> T = transl(1, 0, 0)*troty(pi/2)   % get a transformation matrix with rotation about the Y direction
T =
    0.0000         0    1.0000    1.0000
         0    1.0000         0         0
   -1.0000         0    0.0000         0
         0         0         0    1.0000
>> J = tr2jac(T)
J =
    0.0000         0   -1.0000         0    1.0000         0
         0    1.0000         0         0         0    1.0000
    1.0000         0    0.0000         0         0         0
         0         0         0    0.0000         0   -1.0000
         0         0         0         0    1.0000         0
         0         0         0    1.0000         0    0.0000
3
4
>>p560.jacob
n(qn)
ans
=
-0.0000 -0.5963
-0.2910
0 0
0
0.5963 0.0000 0.0000
0
0
0
0.1500 0.0144 0.3197
0
0
0
0.707
-1.0000
0
0
1
0
0
- 1.000
-0.0000 1.0000 1.0000 0.0000 0
0.000
0.000
-0.0000 0.0000 0
0.7071 0
>>p560.jacob0(qn,
'eul')
ans
=
0
1.000
0
% get jacobian matrix for Eular
values
The same analysis can be applied to the 3R planar robot from Experiment 2:
+---+-----------+-----------+-----------+-----------+
| j |     theta |         d |         a |     alpha |
+---+-----------+-----------+-----------+-----------+
|  1|         q1|          0|          4|          0|
|  2|         q2|          0|          3|          0|
|  3|         q3|          0|          2|          0|
+---+-----------+-----------+-----------+-----------+
grav =     0    base = 1 0 0 0    tool = 1 0 0 0
           0           0 1 0 0           0 1 0 0
        9.81           0 0 1 0           0 0 1 0
                       0 0 0 1           0 0 0 1
>> dr = pi/180
dr =
    0.0175
Experiment 7
Aim: To study the forward dynamics of a 3-DOF (3R) robot.
Theory:
>> p560.links(1).dyn   % display the dynamic parameters of link 1
>> p560.gravity'
ans =
         0         0    9.8100
>> p560.gravload(qn)   % torque required to maintain the robot in position qn
ans =
   -0.0000   31.6399    6.0351    0.0000    0.0283         0
>> p560.gravity = p560.gravity/6;   % reduce gravity to one sixth
>> p560.gravload(qn)
ans =
    0.0000    5.2733    1.0059    0.0000    0.0047         0
>> p560.base = trotx(pi);   % robot base rotated by pi about x
>> p560.gravload(qn)
ans =
    0.0000   -5.2733   -1.0059   -0.0000   -0.0047         0
>> Q = p560.gravload(qs)   % robot in the qs position
Q =
   -0.0000   46.0069    8.7722    0.0000    0.0283         0
>> Q = p560.gravload(qr)
Q =
         0   -0.7752    0.2489         0         0         0
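gravload is the special case of inverse dynamics with zero joint velocity and acceleration; a one-line sketch using the toolbox rne function on the p560 model:

z6 = zeros(1,6);
tau = p560.rne(qn, z6, z6)   % equals p560.gravload(qn)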
L(1).mdh = 1;   % Link 1: use modified DH convention
L(2).mdh = 1;
L(3).mdh = 1;
two_link_sau = SerialLink(L, 'name', 'twolink_sau');
two_link_sau =
twolink_sau (3 axis, RRR, stdDH, slowRNE)
+---+-----------+-----------+-----------+-----------+
| j |     theta |         d |         a |     alpha |
+---+-----------+-----------+-----------+-----------+
|  1|         q1|          0|          4|          0|
|  2|         q2|          0|          3|          0|
|  3|         q3|          0|          2|          0|
+---+-----------+-----------+-----------+-----------+
grav =     0    base = 1 0 0 0    tool = 1 0 0 0
           0           0 1 0 0           0 1 0 0
        9.81           0 0 1 0           0 0 1 0
                       0 0 0 1           0 0 0 1
t0 = 0; tf = 4; N = 40; dt = (tf-t0)/N;
>> dt
dt =
    0.1000
t = [t0:dt:tf];
dr = pi/180;
Q0 = [-60 90 30]*dr
Q0 =
   -1.0472    1.5708    0.5236
[tf Q Qd] = two_link_sau.fdyn(1, [20;5;1], Q0, [0.5 0 0])   % forward dynamics for 1 s with constant torque [20;5;1]
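The returned trajectory can then be plotted joint by joint; a short sketch assuming the model above (the constant torque is wrapped in a function handle that accepts whatever arguments the installed toolbox version passes):

[t, Q, Qd] = two_link_sau.fdyn(1, @(varargin) [20; 5; 1], Q0, [0.5 0 0]);
plot(t, Q)                      % joint angles versus time
xlabel('time (s)'); ylabel('joint angle (rad)')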
Experiment 8
AIM: To study the Peter Corke Computer Vision toolbox.
Different Functions and Explanation:
% Spectral Representation of Light
% define a range of wavelengths
>> lambda = [300:10:1000]*1e-9;
% in this case from 300 to 1000 nm, and then compute the blackbody spectra
>> for T=1000:1000:6000
>>     plot(lambda*1e9, blackbody(lambda, T)); hold all
>> end
as shown in Fig. 8.1a.
% The filament of a tungsten lamp has a temperature of 2600 K and glows white hot. The Sun has a surface temperature of 6500 K. The spectra of these sources are compared in Fig. 8.1b.
>> lamp = blackbody(lambda, 2600);
>> sun = blackbody(lambda, 6500);
>> plot(lambda*1e9, [lamp/max(lamp) sun/max(sun)])
% The Sun's spectrum at ground level on the Earth has been measured and tabulated
>> sun_ground = loadspectrum(lambda, 'solar.dat');
>> plot(lambda*1e9, sun_ground)
and is shown in Fig. 8.3a.
Absorption
>> [A, lambda] = loadspectrum([400:10:700]*1e-9, 'water.dat');   % A is the absorption coefficient
>> d = 5;            % d is the path length
>> T = 10.^(-A*d);   % transmission T is the inverse of absorption
>> plot(lambda*1e9, T);
which is plotted in Fig. 8.3b.
Reflection
% the reflectivity of a red house brick
>> [R, lambda] = loadspectrum([100:10:10000]*1e-9, 'redbrick.dat');
>> plot(lambda*1e6, R);
which is plotted in Fig. 8.4. We see that it reflects red colors more than blue.
% The illuminance of the Sun in the visible region
>> lambda = [400:10:700]*1e-9;   % visible spectrum
>> E = loadspectrum(lambda, 'solar.dat');
% at ground level. The reflectivity of the brick is
>> R = loadspectrum(lambda, 'redbrick.dat');
% and the light reflected from the brick is
>> L = E.*R;
>> plot(lambda*1e9, L);
% which is shown in Fig. 8.5. It is this spectrum that is interpreted by our eyes as the color red.
Color
% The luminosity function is provided by the Toolbox
>> human = luminos(lambda);
>> plot(lambda*1e9, human)
% and is shown in Fig. 8.6a.
% The spectral responses of the cones can be loaded; they are shown in Fig. 8.7b
>> cones = loadspectrum(lambda, 'cones.dat');
>> plot(lambda*1e9, cones)
Reproducing Colors
% the cmfrgb function returns the color matching functions
>> lambda = [400:10:700]*1e-9;
>> cmf = cmfrgb(lambda);
>> plot(lambda*1e9, cmf);
>> orange = cmfrgb(600e-9)   % create the sensation of light at 600 nm (orange)
orange =
    2.8717    0.3007    0.0043
>> green = cmfrgb(500e-9)
green =
   -0.2950    0.4906    0.1075
>> w = -green(1);
>> white = [w w w];
>> feasible_green = green + white
feasible_green =
         0    0.7856    0.4025
>> white
white =
    0.2950    0.2950    0.2950
% The Toolbox function cmfrgb can also compute the CIE tristimulus for an arbitrary spectral response.
>> RGB_brick = cmfrgb(lambda, L)
RGB_brick =
    0.6137    0.1416    0.0374
Chromaticity Space
% The Toolbox function lambda2rg computes the color matching function
>> [r, g] = lambda2rg([400:700]*1e-9);
>> plot(r, g)
>> rg_addticks
>> primaries = cmfrgb([700, 546.1, 435.8]*1e-9);
>> plot(primaries(:,1), primaries(:,2), 'd')
% chromaticity of the spectral green color
>> green_cc = lambda2rg(500e-9)
green_cc =
   -0.9733    1.6187
>> plot2(green_cc, 's')
>> white_cc = tristim2cc([1 1 1])   % chromaticity of white
white_cc =
    0.3333    0.3333
>> plot2(white_cc, '*')
Color Names
>> colorname('?burnt')
ans =
    'burntsienna'    'burntumber'
>> colorname('burntsienna')
ans =
    0.5412    0.2118    0.0588
>> bs = colorname('burntsienna', 'xy')
bs =
    0.5258    0.3840
>> colorname('chocolate', 'xy')
ans =
    0.5092    0.4026
>> colorname([0.54 0.20 0.06])
ans =
burntsienna
>> colorname(bs, 'xy')
ans =
'burntsienna'
Other Color Spaces
% The function colorspace can be used to perform conversions between different color spaces.
>> colorspace('RGB->HSV', [1, 0, 0])
ans =
     0     1     1
>> colorspace('RGB->HSV', [0, 1, 0])
ans =
   120     1     1
>> colorspace('RGB->HSV', [0, 0, 1])
ans =
   240     1     1
% the saturation is 1, the colors are pure, and the intensity is 1
>> colorspace('RGB->HSV', [0, 0.5, 0])
ans =
  120.0000    1.0000    0.5000
% the intensity drops but hue and saturation are unchanged
% Image Formation
>> cam = CentralCamera('focal', 0.015);   % create a model of a central perspective camera
>> P = [0.3, 0.4, 3.0]';
>> cam.project(P)
ans =
    0.0015
    0.0020
>> cam.project(P, 'Tcam', transl(-0.5, 0, 0))   % move the camera 0.5 m to the left
ans =
    0.0040
    0.0020
>> cam = CentralCamera('focal', 0.015, 'pixel', 10e-6, ...
       'resolution', [1280 1024], 'centre', [640 512], 'name', 'mycamera')
cam =
name: mycamera [central-perspective]
  focal length:   0.015
  pixel size:     (1e-05, 1e-05)
  principal pt:   (640, 512)
  number pixels:  1280 x 1024
  T:
    1 0 0 0
    0 1 0 0
    0 0 1 0
    0 0 0 1
>> cam.project(P)
ans =
   790
   712
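The pixel values follow directly from the central projection equations u = u0 + (f/rho)X/Z and v = v0 + (f/rho)Y/Z; a hand check of the 790, 712 result:

f = 0.015; rho = 10e-6;       % focal length and pixel size
P = [0.3; 0.4; 3.0];
u = 640 + (f/rho)*P(1)/P(3)   % = 790
v = 512 + (f/rho)*P(2)/P(3)   % = 712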
>> cam.C
ans =
   1.0e+03 *
    1.5000         0    0.6400         0
         0    1.5000    0.5120         0
         0         0    0.0010         0
>> cam.fov()*180/pi   % field of view in degrees
ans =
   46.2127   37.6930
% create a 3x3 grid of points in the xy-plane with overall side length 0.2 m, centred at (0, 0, 1); this returns a 3x9 matrix
>> P = mkgrid(3, 0.2, 'T', transl(0, 0, 1.0));
>> P(:,1:4)
ans =
   -0.1000   -0.1000   -0.1000         0
   -0.1000         0    0.1000   -0.1000
    1.0000    1.0000    1.0000    1.0000
% The image-plane coordinates of the vertices are
>> cam.project(P)
ans =
   490   490   490   640   640   640   790   790   790
   362   512   662   362   512   662   362   512   662
>> cam.plot(P)
h =
  173.0022
>> Tcam = transl(-1, 0, 0.5)*troty(0.9);   % oblique view of the plane
>> cam.plot(P, 'Tcam', Tcam)
>> cam.project([1 0 0]', 'Tcam', Tcam)
ans =
   1.0e+03 *
    1.8303
    0.5120
>> p = cam.plot(P, 'Tcam', Tcam)
>> p(:,1:4)   % image-plane coordinates for the oblique viewing case
ans =
  512.0000  659.6670
  374.9050
>> cube = mkcube(0.2, 'T', transl([0, 0, 1]));   % projection of a cube
>> cam.plot(cube);
>> [X,Y,Z] = mkcube(0.2, 'T', transl([0, 0, 1.0]), 'edge');
>> cam.mesh(X, Y, Z)
>> cam.T = transl(-1, 0, 0.5)*troty(0.8);   % oblique view
>> cam.mesh(X, Y, Z, 'Tcam', Tcam);
% Camera Calibration
% Homogeneous transformation approach
>> P = mkcube(0.2);
>> T_unknown = transl(0.1, 0.2, 1.5)*rpy2tr(0.1, 0.2, 0.3);
>> cam = CentralCamera('focal', 0.015, ...
       'pixel', 10e-6, 'resolution', [1280 1024], 'centre', [512 512], ...
       'noise', 0.05);
>> p = cam.project(P, 'Tobj', T_unknown);
>> C = camcald(P, p)
maxm residual 0.067393 pixels.
C =
  883.1620  240.0720  531.4419  612.0432
    0.0985    0.6494  712.0180    1.0000
>> est = invcamcal(C)
est =
name: invcamcal [central-perspective]
  focal length:   1504
  pixel size:     (1, 0.9985)
  principal pt:   (518.6, 505)
  Tcam:
    0.93695   0.29037   0.19446   0.08339
    0.31233   0.94539  0.093208   0.39143
    0.15677   0.14807   0.97647    1.4671
          0         0         0         1
>> est.f/est.rho(1)    % ratio of estimated camera parameters
ans =
   1.5044e+03
>> cam.f/cam.rho(2)    % ratio of true camera parameters
ans =
   1.5000e+03
>> T_unknown*est.T     % relative pose between the true and estimated camera pose
ans =
    0.7557    0.5163    0.4031    0.0000
    0.6037    0.7877    0.1227    0.0001
    0.2541    0.3361    0.9069    0.0041
         0         0         0    1.0000
>>plot_sphere(P,0.03,'r')
>>plot_frame(eye(4,4),'frame','T','color','b','length',0.3)
>>est.plot_camera()
% Pose Estimation
>> cam = CentralCamera('focal', 0.015, 'pixel', 10e-6, ...
       'resolution', [1280 1024], 'centre', [640 512]);   % create a calibrated camera with known parameters
>> P = mkcube(0.2);   % the object is a cube
>> T_unknown = transl(0, 0, 2)*trotx(0.1)*troty(0.2)   % object pose w.r.t. the camera
T_unknown =
    0.9801         0    0.1987         0
    0.0198    0.9950   -0.0978         0
   -0.1977    0.0998    0.9752    2.0000
         0         0         0    1.0000
>> p = cam.project(P, 'Tobj', T_unknown);
>> T_est = cam.estpose(P, p)   % estimate the relative pose of the object
T_est =
    0.9801    0.0000    0.1987    0.0000
    0.0198    0.9950   -0.0978    0.0000
   -0.1977    0.0998    0.9752    2.0000
         0         0         0    1.0000
% Non-Perspective Imaging Models
>> cam = FishEyeCamera('name', 'fisheye', ...
       'projection', 'equiangular', ...
       'pixel', 10e-6, ...
       'resolution', [1280 1024]);   % create a fisheye camera
>> [X,Y,Z] = mkcube(0.2, 'centre', [0.2, 0, 0.3], 'edge');   % create an edge-based cube model
>> cam.mesh(X, Y, Z)
>> [X,Y,Z] = mkcube(1, 'centre', [1, 1, 0.8], 'edge');
>> cam.mesh(X, Y, Z)
% spherical camera
>> cam = SphericalCamera('name', 'spherical')
>> [X,Y,Z] = mkcube(1, 'centre', [2, 3, 1], 'edge');
>> cam.mesh(X, Y, Z)
u0 = 528.1214; v0 = 384.0784;   % camera calibration parameters
l = 2.7899;
m = 996.4617;
>> [Ui,Vi] = imeshgrid(fisheye);   % domain of the input image
>> n = 500;
>> theta_range = (0:n)/n*pi/2 + pi/2;
>> phi_range = (-n:2:n)/n*pi;
>> [Phi,Theta] = meshgrid(phi_range, theta_range);   % n is the number of grid cells
>> r = (l+m)*sin(Theta) ./ (l + cos(Theta));
>> U = r.*cos(Phi) + u0;   % Cartesian coordinates in the input image
>> V = r.*sin(Phi) + v0;
>> spherical = interp2(Ui, Vi, fisheye, U, V);   % applying the warp
>> idisp(spherical)
% Synthetic Perspective Images
>> W = 1000;
>> m = W/2 / tan(45/2*pi/180)   % 45 degree field of view
m =
   1.2071e+03
>> l = 0;
>> u0 = W/2; v0 = W/2;
>> [Uo,Vo] = meshgrid(0:W-1, 0:W-1);    % the domain of the output image
>> r = sqrt((Uo-u0).^2 + (Vo-v0).^2);   % polar coordinates
>> phi = atan2((Vo-v0), (Uo-u0));
>> Phi_o = phi;                         % spherical coordinates
>> Theta_o = pi - acos(((l+m)*sqrt(r.^2*(1-l^2) + (l+m)^2) - l*r.^2) ./ (r.^2 + (l+m)^2));
>> idisp(perspective)
>> spherical = sphere_rotate(spherical, troty(1.2)*trotz(1.5));
1. Obtaining an image from a file
The Toolbox provides the iread function to read an image:
>> street = iread('street.png');   % which returns a matrix
>> about(street)                   % gives details of the street variable
street [uint8] : 851x1280 (1089280 bytes)
>> street(200,300)   % the pixel at image coordinate (300, 200)
ans =
   42
>> street_d = idouble(street);   % double precision numbers in the range [0, 1]
>> about(street_d)
street_d [double] : 851x1280 (8714240 bytes)
>> street_d = iread('street.png', 'double');   % or specify this as an option while loading
>> idisp(street);   % the function to display an image is idisp
>> flowers = iread('flowers8.png');
>> about(flowers)
flowers [uint8] : 426x640x3 (817920 bytes)
>> pix = flowers(276,318,:)   % the pixel value at (318, 276)
ans(:,:,1) =
   57
ans(:,:,2) =
   91
ans(:,:,3) =
  198
>> idisp(flowers(:,:,1))
2. Images from an Attached Camera
Most laptop computers today have a built-in camera for video conferencing. For computers without a built-in camera an external camera can be easily attached via USB. The means of accessing a camera is operating-system specific, and the Toolbox provides a simple interface to a camera for MacOS, Linux and Windows.
% A list of all attached cameras and their capabilities can be obtained by
>> VideoCamera('?')
% We open a particular camera
>> cam = VideoCamera('name')
% which returns an instance of a VideoCamera object that is a subclass of the ImageSource class.
% The dimensions of the image returned by the camera are given by the size method
>> cam.size()
% an image is obtained using the grab method
>> im = cam.grab();
% which waits until the next frame becomes available.
3. Images from a Movie File
The Toolbox supports reading frames from a movie file stored in any of the popular formats such as AVI, MPEG and MPEG4. For example we can open a movie file
>> cam = Movie('traffic_sequence.mpg');
720 x 576 @ 2.999970e+01 fps
350 frames
% This movie has 350 frames and was captured at 30 frames per second.
% The size of each frame within the movie is
>> cam.size()
ans =
   720   576
% and the next frame is read from the movie file by
>> im = cam.grab();
>> about(im)
im [uint8] : 576x720x3 (1244160 bytes)
% which is a 720 x 576 color image.
% With these few primitives we can write a very simple movie player
1 while 1
2     im = cam.grab;
3     if isempty(im), break; end
4     image(im)
5 end
% where the test at line 3 is to detect the end of file, in which case grab returns an empty matrix.
4. Images from Code
% The Toolbox function testpattern generates simple images with a variety of patterns including lines, grids of dots or squares, intensity ramps and intensity sinusoids.
>> im = testpattern('rampx', 256, 2);
>> im = testpattern('siny', 256, 2);
>> im = testpattern('squares', 256, 50, 25);
>> im = testpattern('dots', 256, 256, 100);
We can also construct an image from simple graphical primitives. First we create a blank canvas containing all black pixels (pixel value of zero)
>> canvas = zeros(200, 200);
% then we create two squares
>> sq1 = 0.5*ones(40, 40);
>> sq2 = 0.9*ones(20, 20);
The first has pixel values of 0.5 (medium grey) and is 40x40. The second is smaller (just 20x20) but brighter, with pixel values of 0.9. Now we can paste these onto the canvas
>> canvas = ipaste(canvas, sq1, [20, 40]);
>> canvas = ipaste(canvas, sq2, [60, 120]);
>> circle = 0.6*kcircle(30);   % a circle of radius 30 pixels with a grey value of 0.6
>> size(circle)
ans =
    61    61
>> canvas = ipaste(canvas, circle, [100, 30]);
% Finally, we draw a line segment onto our canvas
>> canvas = iline(canvas, [30, 40], [150, 190], 0.8);
% which extends from (30, 40) to (150, 190) and its pixels are all set to 0.8.
>> idisp(canvas)
5. Monadic Operations
Monadic image-processing operations result in an image of the same size W x H as the input image, and each output pixel is a function of the corresponding input pixel.
Since an image is represented by a matrix, any MATLAB element-wise matrix function or operator can be applied, for example scalar multiplication or addition, or functions such as abs or sqrt.
The datatype of each pixel can be changed, for example from uint8 (integer pixels in the range [0, 255]) to double precision values in the range [0, 1]
>> imd = idouble(im);
>> im = iint(imd);
A color image has 3 dimensions, which we can also consider as a 2-dimensional image where each pixel value is a 3-vector.
A monadic operation can convert a color image to a greyscale image where each output pixel value is a scalar representing the luminance of the corresponding input pixel
>> grey = imono(flowers);
The inverse operation is
>> color = icolor(grey);
% the histogram of the street scene is computed and displayed by
>> ihist(street)
>> [n,v] = ihist(street);
% where the elements of n are the number of times pixels occur with the value of the corresponding element of v
>> ihist(street, 'cdf');
% The Toolbox function peak will automatically find the peaks
>> peak(n, v)'
ans =
840435119717602675213
8892218176153108111147113119
121126130138230244
We can identify the pixels in this peak by a logical monadic operation
>> shadows = (street >= 30) & (street <= 80);
>> idisp(shadows)
>> im = istretch(street);
% ensures that pixel values span the full range, which is either [0, 1] or [0, 255]
% A more sophisticated version is histogram normalization or histogram equalization
>> im = inormhist(street);
% Such images can be gamma decoded by a nonlinear monadic operation
>> im = igamma(street, 1/0.45);
% which raises each pixel to the specified power, or
>> im = igamma(street, 'sRGB');
>> idisp(street/64)
Experiment 9
AIM: To study object tracking using a Kalman filter in Matlab.
Theory:
The Kalman filter keeps track of the estimated state of the system and the variance or uncertainty of the estimate. The estimate is updated using a state transition model and measurements; x(k|k-1) denotes the estimate of the system's state at time step k before the k-th measurement zk has been taken into account.
The Kalman filter, also known as linear quadratic estimation (LQE), is an algorithm that uses a series of measurements observed over time, containing noise (random variations) and other inaccuracies, and produces estimates of unknown variables that tend to be more precise than those based on a single measurement alone. More formally, the Kalman filter operates recursively on streams of noisy input data to produce a statistically optimal estimate of the underlying system state.
The algorithm works in a two-step process. In the prediction step, the Kalman filter produces estimates of the current state variables, along with their uncertainties. Once the outcome of the next measurement (necessarily corrupted with some amount of error, including random noise) is observed, these estimates are updated using a weighted average, with more weight being given to estimates with higher certainty. Because of the algorithm's recursive nature, it can run in real time using only the present input measurements and the previously calculated state and its uncertainty matrix; no additional past information is required.
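The two-step process is compact when written out; a minimal constant-velocity sketch in which A is the state transition, H the observation matrix, and Q and R the process and measurement noise covariances (all numbers here are illustrative, not taken from the experiment):

dt = 0.1;
A = [1 0 dt 0; 0 1 0 dt; 0 0 1 0; 0 0 0 1];   % constant-velocity state model
H = [1 0 0 0; 0 1 0 0];                        % only position is observed
Q = 0.01*eye(4); R = 0.1*eye(2);
x = zeros(4,1); P = eye(4);
z = [1.0; 2.0];                                % one illustrative measurement
% prediction step
x = A*x;
P = A*P*A' + Q;
% update step
K = P*H'/(H*P*H' + R);                         % Kalman gain
x = x + K*(z - H*x);
P = (eye(4) - K*H)*P;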
CODE:
% Object tracking using a Kalman filter (reconstructed from the original
% listing; vidObject is assumed to be a VideoReader opened on the input video)
nFrames = vidObject.NumberOfFrames;
len = vidObject.Height;
wid = vidObject.Width;
sumFrame = zeros(len, wid);
% process noise covariance for a constant-velocity model with state
% [x y vx vy]; prcNoise holds the per-component noise magnitudes
prcNoiseVar = [(dt^4/4)*(prcNoise(1)^2) 0 (dt^3/2)*(prcNoise(1)*prcNoise(3)) 0; ...
               0 (dt^4/4)*(prcNoise(2)^2) 0 (dt^3/2)*(prcNoise(2)*prcNoise(4)); ...
               (dt^3/2)*(prcNoise(1)*prcNoise(3)) 0 (dt^2)*(prcNoise(3)^2) 0; ...
               0 (dt^3/2)*(prcNoise(2)*prcNoise(4)) 0 (dt^2)*(prcNoise(4)^2)];
% initialize state model covariance equal to state noise variance for the
% first iteration only
stateCov = prcNoiseVar;
statePredCov = zeros(4,4);    % state prediction covariance
obsNoise = [0.01; 0.01];      % noise in the observation model
% the state vector consists of position and velocity
for k = 1 : nFrames
    % read a video frame
    frame = read(vidObject, k);
    grayFrame = rgb2gray(frame);
    % background obtained by averaging the sum of the frames so far
    sumFrame = sumFrame + double(grayFrame);
    background = (sumFrame ./ k);
    object = false(len, wid);   % binary image
    % the detected object is obtained by background subtraction
    object(abs(double(grayFrame) - background) > 10) = 1;
    % morphological operations to remove clutter from the binary image
    object = imopen(object, strel('square', 3));
    object = imclose(object, strel('square', 12));
    object = imfill(object, 'holes');
    % centroid and bounding box of the detected object (the exact extraction
    % call was lost in the original; regionprops is one way to obtain them)
    if(centroid ~= 0)
        cnd = cnd + 1;
    end
    % Kalman algorithm
    if(cnd == 1)
        % initialise the state with the first detected centroid
        X(1) = centroid(1);
        X(2) = centroid(2);
    elseif(cnd > 1)
        if(centroid ~= 0)
            if(absent == 1)
                X(1:2) = centroid(1:2);
            end
        end
        % state model prediction and its covariance
        XPred = A*X;
        statePredCov = A*stateCov*A' + prcNoiseVar;
        % prediction of the observation model
        zPred = XPred(1:2);
        z(1:2) = centroid(1:2);
        zError = z - zPred;
        % state estimation (update step; equations lost in the original listing)
        bboxPred(1:2) = XPred(1:2);
        bboxEst(1:2) = XEst(1:2);
        box(1,1:2) = z(1:2);
        box(2,1:2) = bboxPred(1:2);
        box(3,1:2) = bboxEst(1:2);
        box(2,3:4) = bbox(3:4);
        box(3,3:4) = bbox(3:4);
    end
    subplot(3,1,1);
    % blue for the detected position of the object, red for the predicted
    % position, green for the estimated position
    imshow(uint8(background));
    title('background');
    subplot(3,1,3);
    imshow(object);
    title('binary image of detected objects');
    pause(0.001);
end
close all
OUTPUT:
Experiment 10
AIM: To study Object Tracking using Optical Flow.
THEORY:
In optical flow, motion is estimated as instantaneous image velocities or discrete image displacements. The method calculates the motion between two video frames taken at time instants t and t+Δt at every pixel position. It is based on a local Taylor-series approximation of the image signal, i.e. it uses partial derivatives with respect to the spatial and temporal coordinates.
For a brightness pattern I(x, y, t) that moves by (Δx, Δy) over the interval Δt, brightness constancy gives
I(x+Δx, y+Δy, t+Δt) = I(x, y, t) + (∂I/∂x)Δx + (∂I/∂y)Δy + (∂I/∂t)Δt + higher-order terms,
which leads to the optical flow constraint
(∂I/∂x)Vx + (∂I/∂y)Vy + ∂I/∂t = 0
where Vx and Vy are the x and y components of the velocity (optical flow) of I(x, y, t), and ∂I/∂x, ∂I/∂y, ∂I/∂t are the partial derivatives of the image intensity. The above equation is solved to determine the optical flow vectors in the two directions x and y.
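The constraint has two unknowns (Vx, Vy) per pixel, so it is usually solved over a small window in a least-squares (Lucas-Kanade) sense; a minimal sketch on two consecutive grayscale frames im1 and im2, both double matrices (hypothetical variables, not taken from the Simulink model):

Ix = conv2(im1, [-1 1; -1 1]/4, 'same');   % spatial gradient in x
Iy = conv2(im1, [-1 -1; 1 1]/4, 'same');   % spatial gradient in y
It = im2 - im1;                            % temporal gradient
rows = 100:104; cols = 200:204;            % one 5x5 window (illustrative)
ix = Ix(rows, cols); iy = Iy(rows, cols); it = It(rows, cols);
Aw = [ix(:) iy(:)];
v = -(Aw \ it(:))                          % [Vx; Vy] for this window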
Fig. 1 shows the Simulink model used to track a moving object in a given video. The video used in this model is viptraffic.avi, consisting of moving cars in a road scenario.
1. The initial block is the source block, which is used to read a multimedia file.
OUTPUT:
Fig. 5: (a) Nth frame of the video, (b) (N+1)th frame of the video, (c) optical flow velocity vectors for frames (a) and (b), (d) thresholding for object detection, and (e) object detection.