
NAME: ANADI ANANT JAIN

ROLL NO: 2K12/EC/024


REGN. NO: DTU/12/1224
GROUP-D2

INDEX

S.no   List of Experiments
 1     To study various Spatial Descriptions and Transformations using Peter Corke Toolbox in MATLAB.
 2     To study the Forward Kinematics of a planar 3-degree-of-freedom RRR robot using the Peter Corke toolbox.
 3     To study the Inverse Kinematics of a planar 3-degree-of-freedom RRR robot using the Peter Corke toolbox.
 4     To perform the Forward Kinematics of a 3 DOF planar robot, the Stanford manipulator and the PUMA robot for the given DH parameters using RoboAnalyzer, calculate the end-effector position for a given set of joint values using MATLAB, and verify the results with the RoboAnalyzer output.
 5     To perform the Inverse Kinematics of a 3 DOF planar robot for the given DH parameters using RoboAnalyzer and verify the solutions with a MATLAB program.
 6     To study velocity kinematics for a planar 3R robot.
 7     To study the Forward Dynamics of a 3 DOF 3R robot using the Peter Corke toolbox.
 8     To study the Computer Vision toolbox.
 9     To study the Kalman Filter using MATLAB.
 10    To study Object Tracking using Optical Flow.

Attendance (Out of 10 classes):
Internal Assessment Marks:
Date:
Instructor's Signature:

Experiment 1
Aim: To study various spatial descriptions and transformations using Peter Corke toolbox in MATLAB.

>> T1 = se2(1, 2, 30*pi/180)
% Represents a translation of (1, 2) and a rotation of 30 degrees w.r.t. the world frame
T1 =
    0.8660   -0.5000    1.0000
    0.5000    0.8660    2.0000
         0         0    1.0000
>> trplot2(T1, 'frame', '1', 'color', 'b', 'axis', [0 5 0 5]);
% These options specify that the label for the frame is {1} and that it is colored blue

>> hold on;
>> T2 = se2(2, 1, 0)
T2 =
     1     0     2
     0     1     1
     0     0     1
>> trplot2(T2, 'frame', '2', 'color', 'r', 'axis', [0 5 0 5]);
>> hold on;
% Now a displacement of (2, 1) has been applied with respect to frame {1}
>> T3 = T1*T2
T3 =
    0.8660   -0.5000    2.2321
    0.5000    0.8660    3.8660
         0         0    1.0000
>> trplot2(T3, 'frame', '3', 'color', 'g', 'axis', [0 5 0 5]);
>> hold on;
% The non-commutativity of composition is shown below
>> T4 = T2*T1
T4 =
    0.8660   -0.5000    3.0000
    0.5000    0.8660    3.0000
         0         0    1.0000
>>trplot2(T4, 'frame', '4', 'color', 'c', 'axis', [0 5 0 5]);

(a)(i) Using the ZYZ (alpha, beta, gamma) Euler angle convention, write a MATLAB program that computes the rotation matrix A_R_B when the user enters the Euler angles alpha, beta and gamma, e.g. alpha = 10, beta = 20, gamma = 30 (degrees). Also demonstrate the inverse property of the rotation matrix, i.e. B_R_A = (A_R_B)^-1 = (A_R_B)'.

Code:
>> dr = pi/180;
ang = input('enter alpha, beta and gamma in degrees');
alpha = ang(1)*dr;   % Converting angles in degrees to radians
beta = ang(2)*dr;
gamma = ang(3)*dr;
Tz = rotz(alpha);    % Rotate about the z-axis by angle alpha
Ty = roty(beta);     % Rotate about the current y-axis by angle beta
Tx = rotz(gamma);    % Rotate about the current z-axis by angle gamma (ZYZ convention)
Rmult = Tz*Ty*Tx;    % Resultant rotation matrix
Trpy = eul2tr(alpha, beta, gamma);   % Euler angles to a homogeneous transformation matrix
enter alpha, beta and gamma in degrees[10 20 30]
>> Tz
Tz =
    0.9848   -0.1736         0
    0.1736    0.9848         0
         0         0    1.0000
>> Tx
Tx =
    0.8660   -0.5000         0
    0.5000    0.8660         0
         0         0    1.0000
>> Ty
Ty =
    0.9397         0    0.3420
         0    1.0000         0
   -0.3420         0    0.9397
>> Rmult
Rmult =
    0.7146   -0.6131    0.3368
    0.6337    0.7713    0.0594
   -0.2962    0.1710    0.9397
>> Trpy
Trpy =
    0.7146   -0.6131    0.3368         0
    0.6337    0.7713    0.0594         0
   -0.2962    0.1710    0.9397         0
         0         0         0    1.0000
>> R1 = inv(Rmult)   % (A_R_B)^-1, the inverse rotation matrix
R1 =
    0.7146    0.6337   -0.2962
   -0.6131    0.7713    0.1710
    0.3368    0.0594    0.9397
>> R2 = Rmult.'   % (A_R_B)', the transpose equals the inverse for a rotation matrix
R2 =
    0.7146    0.6337   -0.2962
   -0.6131    0.7713    0.1710
    0.3368    0.0594    0.9397

(ii) Write a MATLAB program to calculate the Euler angles alpha, beta, gamma when the user enters the rotation matrix A_R_B.
>> ang = tr2eul(Trpy)   % Converting the transformation matrix back to Euler angles
ang =
    0.1745    0.3491    0.5236
>> ang/dr
ans =
   10.0000   20.0000   30.0000

(iii) For a simple rotation of angle beta = 20 degrees and B_P = [1 0 1]', calculate A_P.

>> beta1 = input('enter beta in degree');
Pb = input('enter Pb column wise');
Ty = roty(beta1*dr);   % Rotate about the current y-axis by angle beta1
Pa = Ty*Pb
enter beta in degree20
enter Pb column wise[1 0 1]'

Pa =
    1.2817
         0
    0.5977
(b)(i) Write a MATLAB program to calculate the homogeneous transformation matrix A_T_B when the user enters the ZYX Euler angles alpha, beta, gamma and the position vector A_P_B. Test with the case alpha = 10, beta = 20, gamma = 30 (degrees) and A_P_B = [1 2 3]'.

>> DR = pi/180;
angs = input('Enter alpha beta gamma');
alpha = angs(1)*DR;   % Converting angles in degrees to radians
beta = angs(2)*DR;
gamma = angs(3)*DR;
Pba = input('enter Pba');   % Enter A_P_B
Enter alpha beta gamma[10 20 30]
enter Pba[1 2 3]'
>> Trpy = rpy2tr(gamma, beta, alpha);   % Converting the roll, pitch and yaw angles to a homogeneous transformation matrix
Ttrn = transl(Pba);   % Translation matrix of A_P_B
Tba = Ttrn*Trpy;      % A_T_B corresponding to the new rotated frame
>> Pba
Pba =
     1
     2
     3
>> Trpy
Trpy =
    0.9254   -0.1632    0.3420         0
    0.3188    0.8232   -0.4698         0
   -0.2049    0.5438    0.8138         0
         0         0         0    1.0000
>> Tba
Tba =
    0.9254   -0.1632    0.3420    1.0000
    0.3188    0.8232   -0.4698    2.0000
   -0.2049    0.5438    0.8138    3.0000
         0         0         0    1.0000

(ii) For beta = 20 degrees (alpha = 0, gamma = 0), A_P_B = [3 0 1]' and B_P = [1 0 1]', use MATLAB to calculate A_P.
Pb = input('enter Pb column wise');
Pa = Tba*[Pb; 1];   % Calculating A_P using A_T_B and B_P
Ttrn1 = transl(3, 0, 1)
Tba1 = rpy2tr(0, 20*DR, 0);
Tba2 = Ttrn1*Tba1
enter Pb column wise[1 0 1]'
Ttrn1 =
     1     0     0     3
     0     1     0     0
     0     0     1     1
     0     0     0     1
Tba2 =
    0.9397         0    0.3420    3.0000
         0    1.0000         0         0
   -0.3420         0    0.9397    1.0000
         0         0         0    1.0000

(iii) Write a MATLAB program to calculate the inverse homogeneous transformation matrix, i.e. B_T_A = (A_T_B)^-1. Also show that (A_T_B)*(A_T_B)^-1 = I.

>> Tba_inv = inv(Tba);
eye(4) - Tba*Tba_inv
eye(4) - Tba_inv*Tba

ans =
   1.0e-15 *
   (all entries of order 0.1 or smaller, i.e. numerically the zero matrix)
ans =
   1.0e-15 *
   (all entries of order 0.1 or smaller, i.e. numerically the zero matrix)

(c) Write a MATLAB program to calculate A_T_C and C_T_A, given that A_T_B and B_T_C are obtained from the previous parts. Define A_T_B to be the result from part (a) and B_T_C the result from part (b).

>> angs2 = input('enter alpha2, beta2, gamma2');
alpha2 = angs2(1)*DR;
beta2 = angs2(2)*DR;
gamma2 = angs2(3)*DR;
Pcb = input('enter Pcb value columnwise');
Trpy2 = rpy2tr(gamma2, beta2, alpha2);
Ttrn2 = transl(Pcb);
Tcb = Ttrn2*Trpy2;
Tca = Tba*Tcb;
Tac = inv(Tcb)*inv(Tba);
Tac2 = inv(Tca);
Tba2 = Tca*inv(Tcb);
Tcb2 = inv(Tba)*Tca;
enter alpha2, beta2, gamma2[10 20 30]
enter Pcb value columnwise[1 2 3]'
>> Tca
Tca =
    0.7343   -0.0993    0.6715    2.6251
    0.6537    0.3701   -0.6601    2.5556
   -0.1829    0.9237    0.3367    6.3242
         0         0         0    1.0000
>> Tac
Tac =
    0.7343    0.6537   -0.1829   -2.4413
   -0.0993    0.3701    0.9237   -6.5265
    0.6715   -0.6601    0.3367   -2.2051
         0         0         0    1.0000
>> Tac2
Tac2 =
    0.7343    0.6537   -0.1829   -2.4413
   -0.0993    0.3701    0.9237   -6.5265
    0.6715   -0.6601    0.3367   -2.2051
         0         0         0    1.0000
>> Tcb2
Tcb2 =
    0.9254   -0.1632    0.3420    1.0000
    0.3188    0.8232   -0.4698    2.0000
   -0.2049    0.5438    0.8138    3.0000
         0         0         0    1.0000
>> Tba2
Tba2 =
    0.9254   -0.1632    0.3420    1.0000
    0.3188    0.8232   -0.4698    2.0000
   -0.2049    0.5438    0.8138    3.0000
         0         0         0    1.0000

Experiment 2
Aim: Write a MATLAB program for the planar 3-degree-of-freedom RRR robot with the following parameters: L1 = 4 m, L2 = 3 m, L3 = 2 m. Derive the forward kinematics, i.e. 0_T_3, for the following input cases (a closed-form check of these cases is sketched below):
theta = [0 0 0]'
theta = [10 20 30]'
theta = [90 90 90]'

>> L = Link([0 0.1 0.2 pi/2]);   % create a Link object with d = 0.1, a = 0.2, alpha = pi/2
>> L.A(0.5)   % obtain the transformation matrix for theta = 0.5 radian
ans =
    0.8776   -0.0000    0.4794    0.1755
    0.4794    0.0000   -0.8776    0.0959
         0    1.0000    0.0000    0.1000
         0         0         0    1.0000
>> L.RP   % get the type of joint; R represents a revolute joint
ans =
R
>> L.a   % get the link length
ans =
    0.2000
>> L.offset = 0.5;   % specify theta in advance, i.e. 0.5 rad
>> L.A(0)   % get the transformation matrix now
ans =
    0.8776   -0.0000    0.4794    0.1755
    0.4794    0.0000   -0.8776    0.0959
         0    1.0000    0.0000    0.1000
         0         0         0    1.0000
% produces the same matrix as above since the link offset was specified initially
>> L(1) = Link([0 0 1 0])   % create the L(1) Link object
L =
 theta=q, d=0, a=1, alpha=0 (R,stdDH)
>> L(2) = Link([0 0 1 0])   % create the L(2) Link object
L =
 theta=q1, d=0, a=1, alpha=0 (R,stdDH)
 theta=q2, d=0, a=1, alpha=0 (R,stdDH)
>> two_link = SerialLink(L, 'name', 'two link')   % the SerialLink robot class represents a serial-link arm-type robot
two_link =
two link (2 axis, RR, stdDH, slowRNE)
+---+-------+------+------+-------+
| j | theta |   d  |   a  | alpha |
+---+-------+------+------+-------+
| 1 |   q1  |    0 |    1 |     0 |
| 2 |   q2  |    0 |    1 |     0 |
+---+-------+------+------+-------+
grav =     0   base = 1 0 0 0   tool = 1 0 0 0
           0          0 1 0 0          0 1 0 0
        9.81          0 0 1 0          0 0 1 0
                      0 0 0 1          0 0 0 1
% grav specifies that gravity is applied in the z direction
% base specifies the initial pose of the robot
% tool specifies the end-effector transform
>> mdl_twolink   % script to directly produce a two-link robot
twolink =
two link (2 axis, RR, stdDH, slowRNE)
from Spong, Hutchinson, Vidyasagar;
+---+-------+------+------+-------+
| j | theta |   d  |   a  | alpha |
+---+-------+------+------+-------+
| 1 |   q1  |    0 |    1 |     0 |
| 2 |   q2  |    0 |    1 |     0 |
+---+-------+------+------+-------+
grav =     0   base = 1 0 0 0   tool = 1 0 0 0
           0          0 1 0 0          0 1 0 0
        9.81          0 0 1 0          0 0 1 0
                      0 0 0 1          0 0 0 1
>> twolink.n   % get the number of links in the robot
ans =
     2
>> links = twolink.links   % get link information
links =
 theta=q1, d=0, a=1, alpha=0 (R,stdDH)
 theta=q2, d=0, a=1, alpha=0 (R,stdDH)
>> twolink.fkine([0 0])   % fkine gives the pose (4x4) of the robot end-effector as a homogeneous transformation
ans =
     1     0     0     2
     0     1     0     0
     0     0     1     0
     0     0     0     1
>> twolink.fkine([pi/4 -pi/4])
ans =
    1.0000         0         0    1.7071
         0    1.0000         0    0.7071
         0         0    1.0000         0
         0         0         0    1.0000
% plot displays the robot in 3D
>> twolink.plot([0 0])   % for theta = [0 0]
>> twolink.plot([pi/4 pi/4])

% robot with one revolute and one prismatic joint
L(1) = Link([0 0 1 0 0]);
L(2) = Link([0 0 1 0 1]);
Linking = SerialLink(L, 'name', 'myrobot');
Linking.plot([0 1]);

>> mdl_puma560
>> p560
p560 =
Puma 560 (6 axis, RRRRRR, stdDH, slowRNE)
viscous friction; params of 8/95;
+---+-------+--------+--------+--------+
| j | theta |    d   |    a   |  alpha |
+---+-------+--------+--------+--------+
| 1 |   q1  |      0 |      0 |  1.571 |
| 2 |   q2  |      0 | 0.4318 |      0 |
| 3 |   q3  |   0.15 | 0.0203 | -1.571 |
| 4 |   q4  | 0.4318 |      0 |  1.571 |
| 5 |   q5  |      0 |      0 | -1.571 |
| 6 |   q6  |      0 |      0 |      0 |
+---+-------+--------+--------+--------+
grav =     0   base = 1 0 0 0   tool = 1 0 0 0
           0          0 1 0 0          0 1 0 0
        9.81          0 0 1 0          0 0 1 0
                      0 0 0 1          0 0 0 1

>> p560.plot(qz)   % zero-angle pose
>> p560.plot(qr)   % ready pose
>> p560.plot(qs)   % stretched pose
>> p560.plot(qn)   % nominal pose

Code:
>> B(1) = Link([0 0 4 0])
B =
 theta=q, d=0, a=4, alpha=0 (R,stdDH)
>> B(2) = Link([0 0 3 0])
B =
 theta=q1, d=0, a=4, alpha=0 (R,stdDH)
 theta=q2, d=0, a=3, alpha=0 (R,stdDH)
>> B(3) = Link([0 0 2 0])
B =
 theta=q1, d=0, a=4, alpha=0 (R,stdDH)
 theta=q2, d=0, a=3, alpha=0 (R,stdDH)
 theta=q3, d=0, a=2, alpha=0 (R,stdDH)
>> SKM = SerialLink(B, 'name', 'myBot')
SKM =
myBot (3 axis, RRR, stdDH, slowRNE)
+---+-------+------+------+-------+
| j | theta |   d  |   a  | alpha |
+---+-------+------+------+-------+
| 1 |   q1  |    0 |    4 |     0 |
| 2 |   q2  |    0 |    3 |     0 |
| 3 |   q3  |    0 |    2 |     0 |
+---+-------+------+------+-------+
grav =     0   base = 1 0 0 0   tool = 1 0 0 0
           0          0 1 0 0          0 1 0 0
        9.81          0 0 1 0          0 0 1 0
                      0 0 0 1          0 0 0 1
>> dr = pi/180;
>> q1 = [0 0 0]*dr
q1 =
     0     0     0
>> q2 = [10 20 30]*dr
q2 =
    0.1745    0.3491    0.5236
>> q3 = [90 90 90]*dr
q3 =
    1.5708    1.5708    1.5708
>> T30_1 = SKM.fkine(q1)
T30_1 =
     1     0     0     9
     0     1     0     0
     0     0     1     0
     0     0     0     1
>> T30_2 = SKM.fkine(q2)
T30_2 =
    0.5000   -0.8660         0    7.5373
    0.8660    0.5000         0    3.9266
         0         0    1.0000         0
         0         0         0    1.0000
>> T30_3 = SKM.fkine(q3)
T30_3 =
    0.0000    1.0000         0   -3.0000
   -1.0000    0.0000         0    2.0000
         0         0    1.0000         0
         0         0         0    1.0000
SKM.plot([0 0 0], 'workspace', [-5 5 -5 5 -5 5])
SKM.plot([10*dr, 20*dr, 30*dr], 'workspace', [-5 5 -5 5 -5 5])
SKM.plot([90*dr, 90*dr, 90*dr], 'workspace', [-5 5 -5 5 -5 5])

Experiment 3
Aim: To study the Inverse Kinematics of a planar 3-degree-of-freedom RRR robot using the Peter Corke toolbox.
Theory:
We have shown how to determine the pose of the end-effector given the joint coordinates and optional tool and base transforms. A problem of real practical interest is the inverse problem: given the desired pose of the end-effector, what are the required joint coordinates? For example, if we know the Cartesian pose of an object, what joint coordinates does the robot need in order to reach it? This is the inverse kinematics problem, written in functional form as q = K^-1(T). In general this function is not unique, and for some classes of manipulator no closed-form solution exists, necessitating a numerical solution.
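Before the PUMA examples below, a minimal sketch of the numerical approach for the planar RRR arm of Experiment 2, using the same Link/SerialLink/ikine calls as the rest of this report (the target pose is one of the cases solved later):

% Numerical IK sketch for the planar RRR arm (L1 = 4, L2 = 3, L3 = 2)
L(1) = Link([0 0 4 0]);
L(2) = Link([0 0 3 0]);
L(3) = Link([0 0 2 0]);
bot = SerialLink(L, 'name', 'rrr');
Td = transl(7.5373, 3.9266, 0)*trotz(60*pi/180);   % desired end-effector pose
% mask [1 1 0 0 0 1]: solve only for x, y and rotation about z (planar task)
q = bot.ikine(Td, [0 0 0], [1 1 0 0 0 1])
bot.fkine(q)   % verification: should reproduce Td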
>> mdl_puma560
>> qn
qn =
         0    0.7854    3.1416         0    0.7854         0
>> T = p560.fkine(qn)   % fkine gives the pose (4x4) of the robot end-effector as a homogeneous transformation
T =
   -0.0000    0.0000    1.0000    0.5963
   -0.0000    1.0000   -0.0000   -0.1501
   -1.0000   -0.0000   -0.0000   -0.0144
         0         0         0    1.0000
>> qi = p560.ikine6s(T)   % ikine6s gives the joint coordinates corresponding to the robot end-effector with pose T
qi =
   -0.0000    0.7854    3.1416    3.1416   -0.7854   -3.1416
>> qz
qz =
     0     0     0     0     0     0
>> T = p560.fkine(qz)
T =
    1.0000         0         0    0.4521
         0    1.0000         0   -0.1500
         0         0    1.0000    0.4318
         0         0         0    1.0000
>> qi = p560.ikine6s(T)
qi =
         0    1.5249   -3.0476   -3.1416   -1.5228   -3.1416
>> qz = [pi/4, 0, pi/2, pi/4, 0, 0]
qz =
    0.7854         0    1.5708    0.7854         0         0
>> T = p560.fkine(qz)
T =
   -0.5000   -0.5000   -0.7071    0.1061
    0.5000    0.5000   -0.7071   -0.1061
    0.7071   -0.7071    0.0000    0.0203
         0         0         0    1.0000
>> qi = p560.ikine6s(T)
Warning: point not reachable
> In SerialLink.ikine6s at 201
qi =
   NaN   NaN   NaN   NaN   NaN   NaN
>> qi = p560.ikine6s(T, 'ru')   % gives the original set of joint angles by specifying a right-handed configuration with the elbow up
qi =
    2.6486   -2.3081    3.1416   -2.4673   -0.8604   -0.4805
>> p560.plot(qi)

>> p560.ikine6s( transl(1, 0, 0) )   % Due to mechanical limits on joint angles and possible collisions between links, not all eight solutions are physically achievable; here no solution can be achieved
Warning: point not reachable
> In SerialLink.ikine6s at 201
ans =
   NaN   NaN   NaN   NaN   NaN   NaN
>> p560.ikine6s( transl(0.5, 0.5, 0.5) )
ans =
    0.9992    0.7873   -1.8455    3.1416   -1.0582    2.1424
Code:
Write a MATLAB program to solve the planar RRR robot inverse kinematics problem with length parameters L1 = 4 m, L2 = 3 m, L3 = 2 m, for the following relative poses:
(a) 0TH = [1 0 0 9; 0 1 0 0; 0 0 1 0; 0 0 0 1]
(b) 0TH = [0.5 -0.866 0 7.5373; 0.866 0.6 0 3.9266; 0 0 1 0; 0 0 0 1]
(c) 0TH = [0 1 0 -3; -1 0 0 2; 0 0 1 0; 0 0 0 1]
(d) 0TH = [0.866 0.5 0 -3.1245; -0.5 0.866 0 9.1674; 0 0 1 0; 0 0 0 1]
(The SerialLink object anadi used below is the RRR arm built as in Experiment 2.)
(a)
TH0 =
     1     0     0     9
     0     1     0     0
     0     0     1     0
     0     0     0     1
dr = pi/180;
q1 = anadi.ikine(TH0, [0 0 0], [1 1 0 0 0 1])
q1 =
     0     0     0
q1 = anadi.ikine(TH0, [90 -90 90]*dr, [1 1 0 0 0 1])
q1 =
   1.0e-03 *
    0.1934   -0.4512    0.2578
anadi.plot(q1, 'workspace', [-8 8 -8 8 -8 8])
>> anadi.fkine(q1)
ans =
    1.0000   -0.0000         0    9.0000
    0.0000    1.0000         0    0.0000
         0         0    1.0000         0
         0         0         0    1.0000
(b)
TH1 = [0.5 -0.866 0 7.5373; 0.866 0.6 0 3.9266; 0 0 1 0; 0 0 0 1]
TH1 =
    0.5000   -0.8660         0    7.5373
    0.8660    0.6000         0    3.9266
         0         0    1.0000         0
         0         0         0    1.0000
>> q1 = anadi.ikine(TH1, [0 0 0], [1 1 0 0 0 1])
Error: matrix not orthonormal rotation matrix
% The above can be proved:
TH1(1:3,1:3)*TH1(1:3,1:3)'   % for an orthonormal matrix the result should be the identity matrix
ans =
    1.0000   -0.0866         0
   -0.0866    1.1100         0
         0         0    1.0000
(c)
TH2 = [0 1 0 -3; -1 0 0 2; 0 0 1 0; 0 0 0 1]
TH2 =
     0     1     0    -3
    -1     0     0     2
     0     0     1     0
     0     0     0     1
>> q2 = anadi.ikine(TH2, [0 0 0], [1 1 0 0 0 1])
q2 =
    1.5708    1.5708    1.5708
>> anadi.plot(q2)
q2 = anadi.ikine(TH2, [90 -90 90]*dr, [1 1 0 0 0 1])
q2 =
    2.8578   -1.5708   -2.8578
>> anadi.plot(q2)

(d)
TH3 =
    0.8660    0.5000         0   -3.1245
   -0.5000    0.8660         0    9.1674
         0         0    1.0000         0
         0         0         0    1.0000
>> q3 = anadi.ikine(TH3, [0 0 0]*dr, [1 1 0 0 0 1])
q3 =
    1.6852    1.6649    2.4095
anadi.fkine(q3)
ans =
    0.8660    0.5000         0   -1.6596
   -0.5000    0.8660         0    2.3529
         0         0    1.0000         0
         0         0         0    1.0000
q3 = anadi.ikine(TH3, [90 -90 90]*dr, [1 1 0 0 0 1])   % for a different initial estimate
q3 =
   1.0e+04 *
   -2.6746    6.4932   -3.7081
>> anadi.fkine(q3)
ans =
    0.8769    0.4808         0   -1.6122
   -0.4808    0.8769         0    2.3772
         0         0    1.0000         0
         0         0         0    1.0000
q3 = anadi.ikine(TH3, [0 0 0], [2 0 0 0 0 1])   % for a different mask
q3 =
   -2.0655   -0.9075    2.4494
>> anadi.fkine(q3)
ans =
    0.8660    0.5000         0   -3.1245
   -0.5000    0.8660         0   -5.0238
         0         0    1.0000         0
         0         0         0    1.0000
>> anadi.plot(q3)
The different results for the same pose 0TH are due to different choices of the initial estimate (Q0) and the mask (M).

Experiment 4
Aim: To perform the forward kinematics of a 3 DOF planar robot, the Stanford manipulator and the PUMA robot for the given DH parameters using RoboAnalyzer, calculate the end-effector position for a given set of joint values using MATLAB, and verify the results with the RoboAnalyzer output.
Theory:
Forward Kinematics: to determine the position and orientation of the end-effector given the values of the joint variables of the robot.
Using RoboAnalyzer: RoboAnalyzer is a 3D model based robotics learning system developed using OpenTK and Visual C#.
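As background for the DH tables used below, each row of a DH table generates one link transform. A minimal sketch in plain MATLAB (the function name dh_transform is ours, not part of RoboAnalyzer or the toolbox):

% Standard DH link transform: A = Rz(theta)*Tz(d)*Tx(a)*Rx(alpha)
function A = dh_transform(theta, d, a, alpha)
    ct = cos(theta); st = sin(theta);
    ca = cos(alpha); sa = sin(alpha);
    A = [ct, -st*ca,  st*sa, a*ct;
         st,  ct*ca, -ct*sa, a*st;
          0,     sa,     ca,    d;
          0,      0,      0,    1];
end
% Chaining the link transforms, e.g. T = A1*A2*A3, reproduces fkine.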

The figures above have been suitably labeled for proper description. The UI may be explained as follows:

1. In this block, one can select the DOF and type of robot from the tabs provided on the far left. The center table shows the DH parameters for the robot. These can be changed as per the requirement.
2. Here 4 tabs are present. One can see a DH parameter animation for the robot in the Visualize DH tab. Link Config and EE Config give the transformation matrix for a particular link and from the base to the end-effector respectively.
3. Once the robot is configured, press the FKin button to complete the forward kinematic analysis. Then press the play button to view the trajectory of the robot with the initial and final positions as specified in block 1. One can also change the time duration and the number of steps taken by the robot for reaching from the initial to the final position.
4. Here one can see the actual robot, the coordinate axes w.r.t. each link, and the trajectory plotted after hitting the play button.
5. This screen is obtained by hitting the Graph tab on top of the screen. Here one can see the variation in position for each link separately with time, and the joint parameters such as joint value, velocity, acceleration etc.
6. This block shows the graph for the parameter selected in block 5.

3 DOF planar robot

>> Link
ans =
 theta=q, d=0, a=0, alpha=0 (R,stdDH)
>> L(1) = Link([0 0 0.3 0]);
>> L(2) = Link([0 0 0.25 pi/2]);
>> L(3) = Link([0 0 0.1 0]);
L =
 theta=q1, d=0, a=0.3, alpha=0 (R,stdDH)
 theta=q2, d=0, a=0.25, alpha=1.571 (R,stdDH)
 theta=q3, d=0, a=0.1, alpha=0 (R,stdDH)
>> three_link = SerialLink(L, 'name', 'three link')
three_link =
three link (3 axis, RRR, stdDH, slowRNE)
+---+-------+------+------+-------+
| j | theta |   d  |   a  | alpha |
+---+-------+------+------+-------+
| 1 |   q1  |    0 |  0.3 |     0 |
| 2 |   q2  |    0 | 0.25 | 1.571 |
| 3 |   q3  |    0 |  0.1 |     0 |
+---+-------+------+------+-------+
grav =     0   base = 1 0 0 0   tool = 1 0 0 0
           0          0 1 0 0          0 1 0 0
        9.81          0 0 1 0          0 0 1 0
                      0 0 0 1          0 0 0 1

>> DR = pi/180;
>> q1 = [0 90 0]*DR
q1 =
         0    1.5708         0
>> T30_1 = fkine(three_link, q1)
T30_1 =
    0.0000   -0.0000    1.0000    0.3000
    1.0000    0.0000    0.0000    0.3500
         0    1.0000    0.0000         0
         0         0         0    1.0000

Stanford manipulator
>> Link
ans =
 theta=q, d=0, a=0, alpha=0 (R,stdDH)
>> L(1) = Link([0 0.762 0 -pi/2])
>> L(2) = Link([0 0.393412 0 -pi/2]);
>> L(3) = Link([-pi/2 0 0 0 1]);
>> L(4) = Link([0 0.2268 0 -pi/2]);
>> L(5) = Link([0 0 0 -pi/2]);
>> L(6) = Link([0 0.4318 0 0])
L =
 theta=q1, d=0.762, a=0, alpha=-1.571 (R,stdDH)
 theta=q2, d=0.3934, a=0, alpha=-1.571 (R,stdDH)
 theta=-1.571, d=q3, a=0, alpha=0 (P,stdDH)
 theta=q4, d=0.2268, a=0, alpha=-1.571 (R,stdDH)
 theta=q5, d=0, a=0, alpha=-1.571 (R,stdDH)
 theta=q6, d=0.4318, a=0, alpha=0 (R,stdDH)
>> Stan_link = SerialLink(L, 'name', 'Stan link')
Stan_link =
Stan link (6 axis, RRPRRR, stdDH, slowRNE)
+---+--------+--------+------+--------+
| j | theta  |    d   |   a  |  alpha |
+---+--------+--------+------+--------+
| 1 |   q1   |  0.762 |    0 | -1.571 |
| 2 |   q2   | 0.3934 |    0 | -1.571 |
| 3 | -1.571 |   q3   |    0 |      0 |
| 4 |   q4   | 0.2268 |    0 | -1.571 |
| 5 |   q5   |      0 |    0 | -1.571 |
| 6 |   q6   | 0.4318 |    0 |      0 |
+---+--------+--------+------+--------+
grav =     0   base = 1 0 0 0   tool = 1 0 0 0
           0          0 1 0 0          0 1 0 0
        9.81          0 0 1 0          0 0 1 0
                      0 0 0 1          0 0 0 1
>> q1 = [0 -pi/2 0.635 0 pi/2 pi/2]
q1 =
         0   -1.5708    0.6350         0    1.5708    1.5708
>> T60_1 = fkine(Stan_link, q1)
T60_1 =
   -0.0000    0.0000    1.0000    0.8618
    1.0000    0.0000    0.0000   -0.0384
   -0.0000    1.0000   -0.0000    0.7620
         0         0         0    1.0000

Puma robot

>> L(1) = Link([0 0.77421 0 -pi/2]);
L =
 theta=q, d=0.7742, a=0, alpha=-1.571 (R,stdDH)
>> L(2) = Link([0 0.101592 0.506628 pi]);
>> L(3) = Link([0 -0.0381 0.02 -pi/2]);
>> L(4) = Link([0 0.267969 0 pi/2]);
>> L(5) = Link([0 0 0 -pi/2]);
>> L(6) = Link([0 0.05842 0 0])
L =
 theta=q1, d=0.7742, a=0, alpha=-1.571 (R,stdDH)
 theta=q2, d=0.1016, a=0.5066, alpha=3.142 (R,stdDH)
 theta=q3, d=-0.0381, a=0.02, alpha=-1.571 (R,stdDH)
 theta=q4, d=0.268, a=0, alpha=1.571 (R,stdDH)
 theta=q5, d=0, a=0, alpha=-1.571 (R,stdDH)
 theta=q6, d=0.05842, a=0, alpha=0 (R,stdDH)
>> Puma_link = SerialLink(L, 'name', 'Pumalink')
Puma_link =
Pumalink (6 axis, RRRRRR, stdDH, slowRNE)
+---+-------+---------+--------+--------+
| j | theta |    d    |    a   |  alpha |
+---+-------+---------+--------+--------+
| 1 |   q1  |  0.7742 |      0 | -1.571 |
| 2 |   q2  |  0.1016 | 0.5066 |  3.142 |
| 3 |   q3  | -0.0381 |   0.02 | -1.571 |
| 4 |   q4  |   0.268 |      0 |  1.571 |
| 5 |   q5  |       0 |      0 | -1.571 |
| 6 |   q6  | 0.05842 |      0 |      0 |
+---+-------+---------+--------+--------+
grav =     0   base = 1 0 0 0   tool = 1 0 0 0
           0          0 1 0 0          0 1 0 0
        9.81          0 0 1 0          0 0 1 0
                      0 0 0 1          0 0 0 1
>> q1 = [0 0 0 0 0 0]
q1 =
     0     0     0     0     0     0
>> T60_1 = fkine(Puma_link, q1)
T60_1 =
    1.0000         0         0    0.5266
         0    1.0000         0    0.1397
         0         0    1.0000    1.1006
         0         0         0    1.0000

Experiment 5
Aim of the experiment: To perform inverse kinematics of a 3 DOF planar robot for the given DH parameters using RoboAnalyzer. Derive the analytical expression for the inverse kinematics problem of a 2 DOF planar robot and verify the solution from a MATLAB program.

Theory:
Inverse Kinematics (IKin) consists of determining the joint variables corresponding to a given end-effector orientation and position. The solution to this problem is of fundamental importance in order to transform the motion specifications assigned to the end-effector in the operational space into the corresponding joint space motions. There may be multiple or no solutions for a given end-effector position and orientation.
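For the 2 DOF planar robot mentioned in the aim, the analytical expressions follow from the law of cosines. A minimal sketch (the link lengths and target position are example values):

% Analytical IK for a 2R planar arm: elbow-down and elbow-up solutions
a1 = 2; a2 = 2;   % link lengths (example values)
px = 2; py = 2;   % desired end-effector position (example values)
c2 = (px^2 + py^2 - a1^2 - a2^2)/(2*a1*a2);   % cos(theta2), law of cosines
s2 = sqrt(1 - c2^2);                          % +/- give the two elbow configurations
th2 = atan2(s2, c2);                          % elbow-down; use atan2(-s2, c2) for elbow-up
th1 = atan2(py, px) - atan2(a2*sin(th2), a1 + a2*cos(th2));
% Verify by forward kinematics (should equal [px, py]):
[a1*cos(th1) + a2*cos(th1+th2), a1*sin(th1) + a2*sin(th1+th2)]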

SOLUTIONS OF IKIN
To select a robot and view the solutions of its Inverse Kinematics, the steps are as follows:
1. Click on the IKin button. It shows a separate window (Figure 16).
2. Select a robot.
3. Enter the input parameters.
4. Click on the IKin button.
5. View the possible solutions.
6. Click on the Show button. It shows the selected solution in the 3D Model window. To see this, go back to the main window by minimizing the IKin window.
7. Select any of the obtained solutions as the initial and final solution.
8. Click on OK. This step replaces the initial and final joint values in the DH Parameter table.

Code:
The solution may be obtained from MATLAB using trigonometric analysis or directly from RoboAnalyzer.
(a) 3 DOF planar robot
a1 = 2; a2 = 2; a3 = 1;   % Non-zero constant DH parameters
phi = pi/3; px = 2.5 + sqrt(3); py = 1 + sqrt(3)/2;   % Input
wx = px - a3*cos(phi);    % wrist point
wy = py - a3*sin(phi);
del = wx*wx + wy*wy;
c2 = (del - a1*a1 - a2*a2)/(2*a1*a2);   % Calculation for theta2
s2 = sqrt(1 - c2*c2);
th21 = atan2(s2, c2); th22 = atan2(-s2, c2);
% Calculation for theta1
s11 = ((a1 + a2*cos(th21))*wy - a2*sin(th21)*wx)/del;
c11 = ((a1 + a2*cos(th21))*wx + a2*sin(th21)*wy)/del;
s12 = ((a1 + a2*cos(th22))*wy - a2*sin(th22)*wx)/del;
c12 = ((a1 + a2*cos(th22))*wx + a2*sin(th22)*wy)/del;
th11 = atan2(s11, c11); th12 = atan2(s12, c12);
th31 = phi - th11 - th21; th32 = phi - th12 - th22;   % Calculation for theta3
r2d = 180/pi;   % Angles in degrees
th11d = th11*r2d; th12d = th12*r2d; th21d = th21*r2d;
th22d = th22*r2d; th31d = th31*r2d; th32d = th32*r2d;

RoboAnalyzer Output

(b) 3R Articulated
>> a2 = 1; a3 = 1;
>> px = 1; py = 0; pz = 0;
>> delxy = px*px + py*py; del = delxy + pz*pz;
>> th11 = atan2(py, px);
>> th12 = pi + atan2(py, px);
>> c3 = (del - a2*a2 - a3*a3)/(2*a2*a3); s3 = sqrt(1 - c3*c3);
>> th31 = atan2(s3, c3); th32 = atan2(-s3, c3);
>> s21 = (-(a2 + a3*cos(th31))*pz - a3*s3*delxy)/del;
>> c21 = ((a2 + a3*cos(th31))*delxy + a3*s3*pz)/del;
>> s22 = (-(a2 + a3*cos(th31))*pz + a3*s3*delxy)/del;
>> c22 = ((a2 + a3*cos(th31))*delxy - a3*s3*pz)/del;
>> th21 = atan2(s21, c21); th22 = atan2(s22, c22);
>> th23 = pi - th21; th24 = pi - th22;
>> r2d = 180/pi;
>> th11d = th11*r2d;
>> th12d = th12*r2d;
>> th21d = th21*r2d;
>> th22d = th22*r2d;
>> th23d = th23*r2d;
>> th24d = th24*r2d;
>> th31d = th31*r2d;
>> th32d = th32*r2d;

RoboAnalyzer Output

Experiment 6
Aim: To study velocity kinematics using the Peter Corke toolbox.

Theory:
Velocity Kinematics: velocity kinematics relates the linear and angular velocities of the end-effector to the joint velocities.
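The governing relation is v = J(q)*qdot, where J is the manipulator Jacobian. A minimal sketch of this mapping (the joint-rate vector is an example value):

% Map joint rates to end-effector spatial velocity through the Jacobian
mdl_puma560;
J = p560.jacob0(qn);     % 6x6 Jacobian in world coordinates at pose qn
qd = [0.1 0 0 0 0 0]';   % example joint rates (rad/s)
v = J*qd                 % spatial velocity [vx vy vz wx wy wz]'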

>> mdl_puma560
>> T0 = p560.fkine(qn)   % obtain the transformation matrix
T0 =
   -0.0000    0.0000    1.0000    0.5963
   -0.0000    1.0000   -0.0000   -0.1501
   -1.0000   -0.0000   -0.0000   -0.0144
         0         0         0    1.0000
>> dq = 1e-6;   % give a slight disturbance
>> Tp = p560.fkine(qn + [dq 0 0 0 0 0])   % obtain the new transformation matrix
Tp =
   -0.0000    0.0000    1.0000    0.5963
   -0.0000    1.0000   -0.0000   -0.1500
   -1.0000   -0.0000   -0.0000   -0.0144
         0         0         0    1.0000
>> dTdq1 = (Tp - T0)/dq   % obtain the differential
dTdq1 =
         0   -1.0000   -0.0000    0.1500
    0.0000   -0.0000    1.0000    0.5963
         0         0         0         0
         0         0         0         0
>> dRdq1 = dTdq1(1:3,1:3)   % get the 3x3 differential rotation matrix
dRdq1 =
         0   -1.0000   -0.0000
    0.0000   -0.0000    1.0000
         0         0         0
>> R = T0(1:3, 1:3)   % obtain the rotation matrix from T0
R =
   -0.0000    0.0000    1.0000
   -0.0000    1.0000   -0.0000
   -1.0000   -0.0000   -0.0000
>> S = dRdq1*R'   % obtain the skew-symmetric matrix
S =
   -0.0000   -1.0000    0.0000
    1.0000    0.0000    0.0000
         0         0         0
>> vex(S)   % get the vector from the skew-symmetric matrix
ans =
    0.0000
    0.0000
    1.0000
>> J = p560.jacob0(qn)   % get the Jacobian w.r.t. world coordinates
J =
    0.1501    0.0144    0.3197         0         0         0
    0.5963    0.0000    0.0000         0         0         0
         0    0.5963    0.2910         0         0         0
         0   -0.0000   -0.0000    0.7071   -0.0000   -0.0000
         0   -1.0000   -1.0000   -0.0000   -1.0000   -0.0000
    1.0000    0.0000    0.0000   -0.7071    0.0000   -1.0000
>> T = transl(1, 0, 0)*troty(pi/2)   % get a transformation matrix with rotation about the Y axis
T =
    0.0000         0    1.0000    1.0000
         0    1.0000         0         0
   -1.0000         0    0.0000         0
         0         0         0    1.0000
>> J = tr2jac(T)   % get the new Jacobian matrix w.r.t. the new frame
J =
    0.0000         0   -1.0000         0    1.0000         0
         0    1.0000         0         0         0    1.0000
    1.0000         0    0.0000         0         0         0
         0         0         0    0.0000         0   -1.0000
         0         0         0         0    1.0000         0
         0         0         0    1.0000         0    0.0000
>> vB = J*[1 0 0 0 0 0]';
>> vB'
ans =
    0.0000         0    1.0000         0         0         0
>> p560.jacobn(qn)   % get the Jacobian matrix w.r.t. the end-effector frame
ans =
   -0.0000   -0.5963   -0.2910         0         0         0
    0.5963    0.0000    0.0000         0         0         0
    0.1500    0.0144    0.3197         0         0         0
   -1.0000   -0.0000   -0.0000    0.7071   -0.0000    1.0000
   -0.0000   -1.0000   -1.0000   -0.0000   -1.0000   -0.0000
    0.0000   -0.0000   -0.0000    0.7071   -0.0000    0.0000
>> p560.jacob0(qn, 'eul')   % get the Jacobian matrix for Euler angle rates
ans =
    0.1501    0.0144    0.3197         0         0         0
    0.5963    0.0000    0.0000         0         0         0
         0    0.5963    0.2910         0         0         0
   -0.7071    0.0000    0.0000    1.0000    0.0000    0.0000
   -0.0000    1.0000    1.0000    0.0000    1.0000    0.0000
    1.0000   -0.0000   -0.0000   -0.7071    0.0000    1.0000
Q. To study inverse kinematics and develop a MATLAB program using the Peter Corke toolbox to calculate the Jacobian matrix for a planar 3R robot, given the link lengths l1 = 4 m, l2 = 3 m, l3 = 2 m and initial joint angles (theta1, theta2, theta3) = [10, 20, 30] degrees.
>> L(1) = Link([0 0 4 0]);
>> L(2) = Link([0 0 3 0]);
>> L(3) = Link([0 0 2 0])
L =
 theta=q1, d=0, a=4, alpha=0 (R,stdDH)
 theta=q2, d=0, a=3, alpha=0 (R,stdDH)
 theta=q3, d=0, a=2, alpha=0 (R,stdDH)
>> two_link_jacob = SerialLink(L, 'name', 'two_link_jacob')
two_link_jacob =
two_link_jacob (3 axis, RRR, stdDH, slowRNE)
+---+-------+------+------+-------+
| j | theta |   d  |   a  | alpha |
+---+-------+------+------+-------+
| 1 |   q1  |    0 |    4 |     0 |
| 2 |   q2  |    0 |    3 |     0 |
| 3 |   q3  |    0 |    2 |     0 |
+---+-------+------+------+-------+
grav =     0   base = 1 0 0 0   tool = 1 0 0 0
           0          0 1 0 0          0 1 0 0
        9.81          0 0 1 0          0 0 1 0
                      0 0 0 1          0 0 0 1
>> dr = pi/180
dr =
    0.0175
>> q0 = [10 20 30]*dr
q0 =
    0.1745    0.3491    0.5236
>> jacobian = jacob0(two_link_jacob, q0)
jacobian =
   -3.9266   -3.2321   -1.7321
    7.5373    3.5981    1.0000
         0         0         0
         0         0         0
         0         0         0
    1.0000    1.0000    1.0000

Experiment 7
Aim: To study the forward dynamics of a 3 DOF 3R robot.
Theory:
Dynamics: robot dynamics is concerned with the relationship between the forces acting on a robot mechanism and the accelerations they produce. Forward dynamics works out the accelerations given the forces.
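Concretely, forward dynamics solves M(q)*qdd = tau - C(q,qd)*qd - g(q) for the accelerations. A minimal sketch using the toolbox accel method (the state and torque values are examples):

% Forward dynamics: joint accelerations produced by applied torques
mdl_puma560;
q = qn;                % example pose
qd = zeros(1,6);       % at rest
tau = [20 5 1 0 0 0];  % example applied joint torques (N m)
qdd = p560.accel(q, qd, tau)   % solves M(q)*qdd = tau - C(q,qd)*qd - g(q)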
>> mdl_puma560
>> Q = p560.rne(qn, qz, qz)   % joint torques required for the robot to achieve the specified joint position Q, velocity QD and acceleration QDD
Q =
   -0.0000   31.6399    6.0351    0.0000    0.0283         0
>> Q = p560.rne(qn, qz, qz, [0 0 9.81]')   % [0 0 9.81]' gives the gravitational vector
Q =
   -0.0000   31.6399    6.0351    0.0000    0.0283         0
>> [q, q1, q2] = jtraj(qz, qr, 10);   % Compute a joint-space trajectory between two points

>> p560.links(1).dyn   % show the dynamic parameters of link 1

>> p560.gravity'   % Default gravity assumed by the Puma robot
ans =
         0         0    9.8100
>> p560.gravload(qn)   % Torque required to hold the robot in position qn
ans =
   -0.0000   31.6399    6.0351    0.0000    0.0283         0
>> p560.gravity = p560.gravity/6;   % reduce gravity to one sixth
>> p560.gravload(qn)
ans =
    0.0000    5.2733    1.0059    0.0000    0.0047         0
>> p560.base = trotx(pi);   % Rotate the robot so that it hangs upside down
>> p560.gravload(qn)
ans =
    0.0000   -5.2733   -1.0059   -0.0000   -0.0047         0
>> Q = p560.gravload(qs)   % Robot in the qs position
Q =
   -0.0000   46.0069    8.7722    0.0000    0.0283         0
>> Q = p560.gravload(qr)
Q =
         0   -0.7752    0.2489         0         0         0

Q. To study the forward dynamics of a 3 DOF 3R robot and develop a MATLAB program using the Peter Corke toolbox to calculate the forward dynamics for a planar 3R robot, given the robot parameters L1 = 4 m, L2 = 3 m, L3 = 2 m, m1 = 20 kg, m2 = 15 kg, m3 = 10 kg and Izz1 = 0.5 kg m^2, Izz2 = 0.2 kg m^2, Izz3 = 0.1 kg m^2.
Ignore gravity by assuming that gravity acts in a direction normal to the plane of motion. Using the Peter Corke toolbox and MATLAB commands, solve the forward dynamics problem (i.e. with the available driving joint torques and initial joint angles and initial joint rates). Perform this simulation for 4 s, with applied joint torques tau = {tau1, tau2, tau3} = {20, 5, 1} N m and initial configuration theta0 = {-60, 90, 30} degrees.

L(1) = Link([0 0 4 0 0 20 2 0 0 0 0 0.5 0 0 0 0 1 0 0 0]);    % Link 1 with a = 4 m, m = 20 kg, Izz = 0.5 kg m^2
L(2) = Link([0 0 3 0 0 15 1.5 0 0 0 0 0.2 0 0 0 0 1 0 0 0]);  % Link 2 with a = 3 m, m = 15 kg, Izz = 0.2 kg m^2
L(3) = Link([0 0 2 0 0 10 1 0 0 0 0 0.1 0 0 0 0 1 0 0 0]);    % Link 3 with a = 2 m, m = 10 kg, Izz = 0.1 kg m^2
L(1).mdh = 1;   % Use modified DH parameters
L(2).mdh = 1;
L(3).mdh = 1;
two_link_sau = SerialLink(L, 'name', 'twolink_sau');
two_link_sau =
twolink_sau (3 axis, RRR, stdDH, slowRNE)
+---+-------+------+------+-------+
| j | theta |   d  |   a  | alpha |
+---+-------+------+------+-------+
| 1 |   q1  |    0 |    4 |     0 |
| 2 |   q2  |    0 |    3 |     0 |
| 3 |   q3  |    0 |    2 |     0 |
+---+-------+------+------+-------+
grav =     0   base = 1 0 0 0   tool = 1 0 0 0
           0          0 1 0 0          0 1 0 0
        9.81          0 0 1 0          0 0 1 0
                      0 0 0 1          0 0 0 1
t0 = 0; tf = 4; N = 40; dt = (tf - t0)/N;
>> dt
dt =
    0.1000
t = [t0:dt:tf];
dr = pi/180;
Q0 = [-60 90 30]*dr   % Specify the initial robot position
Q0 =
   -1.0472    1.5708    0.5236
[tf Q Qd] = two_link_sau.fdyn(1, [20;5;1], Q0, [0.5 0 0])   % Do forward dynamics for 1 s with joint torque [20;5;1]

Experiment 8
AIM: To study the Peter Corke Computer Vision toolbox.
Different functions and explanation:

Spectral Representation of Light
% define a range of wavelengths, in this case from 300 to 1000 nm,
% and then compute the blackbody spectra
>> lambda = [300:10:1000]*1e-9;
>> for T = 1000:1000:6000
>>   plot(lambda*1e9, blackbody(lambda, T)); hold all
>> end
as shown in Fig. 8.1a.

% The filament of a tungsten lamp has a temperature of 2600 K and glows white hot.
% The Sun has a surface temperature of 6500 K. The spectra of these sources are compared in Fig. 8.1b.
>> lamp = blackbody(lambda, 2600);
>> sun = blackbody(lambda, 6500);
>> plot(lambda*1e9, [lamp/max(lamp) sun/max(sun)])

% The Sun's spectrum at ground level on the Earth has been measured and tabulated
>> sun_ground = loadspectrum(lambda, 'solar.dat');
>> plot(lambda*1e9, sun_ground)
and is shown in Fig. 8.3a.

Absorption
>> [A, lambda] = loadspectrum([400:10:700]*1e-9, 'water.dat');   % A is the absorption coefficient
>> d = 5;   % d is the path length
>> T = 10.^(-A*d);   % Transmission T is the inverse of absorption
>> plot(lambda*1e9, T);

which is plotted in Fig. 8.3b.

Reflection
% the reflectivity of a red house brick
>> [R, lambda] = loadspectrum([100:10:10000]*1e-9, 'redbrick.dat');
>> plot(lambda*1e6, R);

which is plotted in Fig. 8.4. We see that it reflects red colors more than blue.

% The illuminance of the Sun in the visible region
>> lambda = [400:10:700]*1e-9;   % visible spectrum
>> E = loadspectrum(lambda, 'solar.dat');   % at ground level
% The reflectivity of the brick is
>> R = loadspectrum(lambda, 'redbrick.dat');
% and the light reflected from the brick is
>> L = E.*R;
>> plot(lambda*1e9, L);
% which is shown in Fig. 8.5. It is this spectrum that is interpreted by our eyes as the color red.

Color
% The luminosity function is provided by the Toolbox
>> human = luminos(lambda);
>> plot(lambda*1e9, human)
% and is shown in Fig. 8.6a.

% The spectral responses of the cones can be loaded and are shown in Fig. 8.7b
>> cones = loadspectrum(lambda, 'cones.dat');
>> plot(lambda*1e9, cones)

Reproducing Colors
% the cmfrgb function returns the color matching functions
>> lambda = [400:10:700]*1e-9;
>> cmf = cmfrgb(lambda);
>> plot(lambda*1e9, cmf);

>> orange = cmfrgb(600e-9)   % create the sensation of light at 600 nm (orange)
orange =
    2.8717    0.3007    0.0043
>> green = cmfrgb(500e-9)
green =
   -0.2950    0.4906    0.1075
>> w = -green(1);
>> white = [w w w];
>> feasible_green = green + white
feasible_green =
         0    0.7856    0.4025
>> white
white =
    0.2950    0.2950    0.2950
% The Toolbox function cmfrgb can also compute the CIE tristimulus for an arbitrary spectral response
>> RGB_brick = cmfrgb(lambda, L)
RGB_brick =
    0.6137    0.1416    0.0374
Chromaticity Space
% The Toolbox function lambda2rg computes the chromaticity coordinates
>> [r, g] = lambda2rg([400:700]*1e-9);
>> plot(r, g)
>> rg_addticks

>> primaries = cmfrgb([700, 546.1, 435.8]*1e-9);
>> plot(primaries(:,1), primaries(:,2), 'd')

% chromaticity of the spectral green color
>> green_cc = lambda2rg(500e-9)
green_cc =
   -0.9733    1.6187
>> plot2(green_cc, 's')

>> white_cc = tristim2cc([1 1 1])   % of white color
white_cc =
    0.3333    0.3333
>> plot2(white_cc, '*')

Color Names
>> colorname('?burnt')
ans =
    'burntsienna'    'burntumber'
>> colorname('burntsienna')
ans =
    0.5412    0.2118    0.0588
>> bs = colorname('burntsienna', 'xy')
bs =
    0.5258    0.3840
>> colorname('chocolate', 'xy')
ans =
    0.5092    0.4026
>> colorname([0.54 0.20 0.06])
ans =
burntsienna
>> colorname(bs, 'xy')
ans =
'burntsienna'

Other Color Spaces
% The function colorspace can be used to perform conversions between different color spaces
>> colorspace('RGB->HSV', [1, 0, 0])
ans =
     0     1     1
>> colorspace('RGB->HSV', [0, 1, 0])
ans =
   120     1     1
>> colorspace('RGB->HSV', [0, 0, 1])
ans =
   240     1     1
% the saturation is 1, the colors are pure, and the intensity is 1
>> colorspace('RGB->HSV', [0, 0.5, 0])
ans =
  120.0000    1.0000    0.5000
% the intensity drops but hue and saturation are unchanged

Image Formation
>> cam = CentralCamera('focal', 0.015);   % creating a model of a central perspective camera
>> P = [0.3, 0.4, 3.0]';
>> cam.project(P)
ans =
    0.0015
    0.0020
>> cam.project(P, 'Tcam', transl(-0.5, 0, 0))   % moving the camera 0.5 m to the left
ans =
    0.0040
    0.0020

>> cam = CentralCamera('focal', 0.015, 'pixel', 10e-6, ...
   'resolution', [1280 1024], 'centre', [640 512], 'name', 'mycamera')
% displays the parameters of the imaging model
name: mycamera [central-perspective]
  focal length:   0.015
  pixel size:     (1e-05, 1e-05)
  principal pt:   (640, 512)
  number pixels:  1280 x 1024
  T:
     1     0     0     0
     0     1     0     0
     0     0     1     0
     0     0     0     1
>> cam.project(P)
ans =
   790
   712
>> cam.C
ans =
   1.0e+03 *
    1.5000         0    0.6400         0
         0    1.5000    0.5120         0
         0         0    0.0010         0
>> cam.fov()*180/pi   % field of view
ans =
   46.2127   37.6930
% we create a 3x3 grid of points in the xy-plane with overall side length 0.2 m, centred at (0, 0, 1); mkgrid returns a 3x9 matrix
>> P = mkgrid(3, 0.2, 'T', transl(0, 0, 1.0));
>> P(:,1:4)
ans =
   -0.1000   -0.1000   -0.1000         0
   -0.1000         0    0.1000   -0.1000
    1.0000    1.0000    1.0000    1.0000
>> cam.project(P)   % The image-plane coordinates of the vertices are
ans =
   490   490   490   640   640   640   790   790   790
   362   512   662   362   512   662   362   512   662
>> cam.plot(P)
h =
  173.0022

>> Tcam = transl(-1, 0, 0.5)*troty(0.9);   % oblique view of the plane
>> cam.plot(P, 'Tcam', Tcam)
>> cam.project([1 0 0 0]', 'Tcam', Tcam)   % projection of a point at infinity in the x-direction
ans =
   1.0e+03 *
    1.8303
    0.5120
>> p = cam.plot(P, 'Tcam', Tcam)
>> p(:,1:4)   % the image-plane coordinates for the oblique viewing case
ans =
  887.7638  887.7638  887.7638  955.2451
  364.3330  512.0000  659.6670  374.9050

>> cube = mkcube(0.2, 'T', transl([0, 0, 1]));   % projection of a cube
>> cam.plot(cube);
>> [X, Y, Z] = mkcube(0.2, 'T', transl([0, 0, 1.0]), 'edge');
>> cam.mesh(X, Y, Z)
>> cam.T = transl(-1, 0, 0.5)*troty(0.8);   % oblique view
>> cam.mesh(X, Y, Z, 'Tcam', Tcam);

Camera Calibration
% Homogeneous transformation approach
>> P = mkcube(0.2);
>> T_unknown = transl(0.1, 0.2, 1.5)*rpy2tr(0.1, 0.2, 0.3);
>> cam = CentralCamera('focal', 0.015, ...
   'pixel', 10e-6, 'resolution', [1280 1024], 'centre', [512 512], ...
   'noise', 0.05);
>> p = cam.project(P, 'Tobj', T_unknown);
>> C = camcald(P, p)
maxm residual 0.067393 pixels.
C =
  883.1620 -240.0720  531.4419  612.0432
  259.3786  994.1921  234.8182  712.0180
   -0.1043    0.0985    0.6494    1.0000

% Decomposing the camera calibration matrix
>> est = invcamcal(C)
est =
name: invcamcal [central-perspective]
  focal length:   1504
  pixel size:     (1, 0.9985)
  principal pt:   (518.6, 505)
  Tcam:
    0.93695   -0.29037    0.19446   -0.08339
    0.31233    0.94539   -0.093208  -0.39143
   -0.15677    0.14807    0.97647    1.4671
          0          0          0         1
>> est.f/est.rho(1)   % ratio of estimated parameters of the camera
ans =
   1.5044e+03
>> cam.f/cam.rho(2)   % ratio of true parameters of the camera
ans =
   1.5000e+03
>> T_unknown*est.T   % relative pose between the true and estimated camera pose
ans =
    0.7557   -0.5163    0.4031    0.0000
    0.6037    0.7877   -0.1227    0.0001
   -0.2541    0.3361    0.9069    0.0041
         0         0         0    1.0000

>> plot_sphere(P, 0.03, 'r')
>> plot_frame(eye(4,4), 'frame', 'T', 'color', 'b', 'length', 0.3)
>> est.plot_camera()

Pose Estimation
>> cam = CentralCamera('focal', 0.015, 'pixel', 10e-6, ...
   'resolution', [1280 1024], 'centre', [640 512]);   % create a calibrated camera with known parameters
>> P = mkcube(0.2);   % the object whose pose is to be determined is a cube
>> T_unknown = transl(0, 0, 2)*trotx(0.1)*troty(0.2)   % the object pose w.r.t. the camera
T_unknown =
    0.9801         0    0.1987         0
    0.0198    0.9950   -0.0978         0
   -0.1977    0.0998    0.9752    2.0000
         0         0         0    1.0000
>> p = cam.project(P, 'Tobj', T_unknown);
>> T_est = cam.estpose(P, p)   % estimate the relative pose of the object
T_est =
    0.9801    0.0000    0.1987    0.0000
    0.0198    0.9950   -0.0978   -0.0000
   -0.1977    0.0998    0.9752    2.0000
         0         0         0    1.0000
Non-Perspective Imaging Models
>> cam = FishEyeCamera('name', 'fisheye', ...
   'projection', 'equiangular', ...
   'pixel', 10e-6, ...
   'resolution', [1280 1024]);   % creating a fisheye camera
>> [X, Y, Z] = mkcube(0.2, 'centre', [0.2, 0, 0.3], 'edge');   % creating an edge-based model
>> cam.mesh(X, Y, Z)

Catadioptric Camera
>> cam = CatadioptricCamera('name', 'panocam', ...
   'projection', 'equiangular', ...
   'maxangle', pi/4, ...
   'pixel', 10e-6, ...
   'resolution', [1280 1024])
>> [X, Y, Z] = mkcube(1, 'centre', [1, 1, 0.8], 'edge');
>> cam.mesh(X, Y, Z)

% spherical camera
>> cam = SphericalCamera('name', 'spherical')
>> [X, Y, Z] = mkcube(1, 'centre', [2, 3, 1], 'edge');
>> cam.mesh(X, Y, Z)

Mapping Wide-Angle Images to the Sphere
>> fisheye = iread('fisheye_target.png', 'double', 'grey');
>> u0 = 528.1214; v0 = 384.0784;   % camera calibration parameters
>> l = 2.7899;
>> m = 996.4617;
>> [Ui, Vi] = imeshgrid(fisheye);   % domain of the input image
>> n = 500;   % n is the number of grid cells
>> theta_range = (0:n)/n*pi/2 + pi/2;
>> phi_range = (-n:2:n)/n*pi;
>> [Phi, Theta] = meshgrid(phi_range, theta_range);
>> r = (l + m)*sin(Theta)./(l + cos(Theta));
>> U = r.*cos(Phi) + u0;   % Cartesian coordinates in the input image
>> V = r.*sin(Phi) + v0;
>> spherical = interp2(Ui, Vi, fisheye, U, V);   % applying the warp
>> idisp(spherical)

Synthetic Perspective Images
>> W = 1000;
>> m = W/2/tan(45/2*pi/180)   % 45 degree field of view
m =
   1.2071e+03
>> l = 0;
>> u0 = W/2; v0 = W/2;
>> [Uo, Vo] = meshgrid(0:W-1, 0:W-1);   % the domain of the output image
>> r = sqrt((Uo - u0).^2 + (Vo - v0).^2);   % polar coordinates
>> phi = atan2((Vo - v0), (Uo - u0));
>> Phi_o = phi;   % spherical coordinates
>> Theta_o = pi - acos(((l + m)*sqrt(r.^2*(1 - l^2) + (l + m)^2) - l*r.^2)./(r.^2 + (l + m)^2));
>> spherical = sphere_rotate(spherical, troty(1.2)*trotz(1.5));
>> perspective = interp2(Phi, Theta, spherical, Phi_o, Theta_o);   % resample the sphere onto the perspective image plane
>> idisp(perspective)

1 Obtaining an image from a file
The toolbox provides the iread function to read an image:
>> street = iread('street.png');   % which returns a matrix
>> about(street)   % gives details about street
street [uint8] : 851x1280 (1089280 bytes)
>> street(200,300)   % the pixel at image coordinate (300, 200)
ans =
   42
>> street_d = idouble(street);   % double precision numbers in the range [0, 1]
>> about(street_d)
street_d [double] : 851x1280 (8714240 bytes)
>> street_d = iread('street.png', 'double');   % or specify this as an option while loading
>> idisp(street);   % the function to display an image is idisp

>> flowers = iread('flowers8.png');
>> about(flowers)
flowers [uint8] : 426x640x3 (817920 bytes)
>> pix = flowers(276,318,:)   % the pixel value at (318, 276)
ans(:,:,1) =
   57
ans(:,:,2) =
   91
ans(:,:,3) =
  198
>> idisp(flowers(:,:,1))

2 Images from an Attached Camera
Most laptop computers today have a built-in camera for video conferencing. For computers without a built-in camera an external camera can easily be attached via USB. The means of accessing a camera is operating-system specific, and the Toolbox provides a simple interface to a camera for MacOS, Linux and Windows.
% A list of all attached cameras and their capabilities can be obtained by
>> VideoCamera('?')
% We open a particular camera
>> cam = VideoCamera('name')
% which returns an instance of a VideoCamera object that is a subclass of the ImageSource class.
% The dimensions of the image returned by the camera are given by the size method
>> cam.size()
% an image is obtained using the grab method
>> im = cam.grab();
% which waits until the next frame becomes available.

3 Images from a Movie File
The Toolbox supports reading frames from a movie file stored in any of the popular formats such as AVI, MPEG and MPEG4. For example we can open a movie file
>> cam = Movie('traffic_sequence.mpg');
720 x 576 @ 2.999970e+01 fps
350 frames
% This movie has 350 frames and was captured at 30 frames per second.
% The size of each frame within the movie is
>> cam.size()
ans =
   720   576
% and the next frame is read from the movie file by
>> im = cam.grab();
>> about(im)
im [uint8] : 576x720x3 (1244160 bytes)
% which is a 720 x 576 color image.
% With these few primitives we can write a very simple movie player
1 while 1
2   im = cam.grab;
3   if isempty(im), break; end
4   image(im)
5 end
% where the test at line 3 is to detect the end of file, in which case grab returns an empty matrix.

4 Images from Code
% The Toolbox function testpattern generates simple images with a variety of patterns,
% including lines, grids of dots or squares, intensity ramps and intensity sinusoids
>> im = testpattern('rampx', 256, 2);
>> im = testpattern('siny', 256, 2);
>> im = testpattern('squares', 256, 50, 25);
>> im = testpattern('dots', 256, 256, 100);

We can also construct an image from simple graphical primitives. First we create a blank canvas containing all black pixels (pixel value of zero)
>> canvas = zeros(200, 200);
% then we create two squares
>> sq1 = 0.5*ones(40, 40);
>> sq2 = 0.9*ones(20, 20);
The first has pixel values of 0.5 (medium grey) and is 40x40. The second is smaller (just 20x20) but brighter, with pixel values of 0.9. Now we can paste these onto the canvas
>> canvas = ipaste(canvas, sq1, [20, 40]);
>> canvas = ipaste(canvas, sq2, [60, 120]);
>> circle = 0.6*kcircle(30);   % a circle of radius 30 pixels with a grey value of 0.6
>> size(circle)
ans =
    61    61
>> canvas = ipaste(canvas, circle, [100, 30]);
% Finally, we draw a line segment onto our canvas
>> canvas = iline(canvas, [30, 40], [150, 190], 0.8);
% which extends from (30, 40) to (150, 190) and its pixels are all set to 0.8
>> idisp(canvas)

5 Monadic Operations
Monadic image-processing operations result in an image of the same size W x H as the input image, and each output pixel is a function of the corresponding input pixel. Since an image is represented by a matrix, any MATLAB element-wise matrix function or operator can be applied, for example scalar multiplication or addition, or functions such as abs or sqrt.
The data type of each pixel can be changed, for example from uint8 (integer pixels in the range [0, 255]) to double precision values in the range [0, 1]
>> imd = idouble(im);
>> im = iint(imd);
A color image has 3 dimensions, which we can also consider as a 2-dimensional image where each pixel value is a 3-vector.
A monadic operation can convert a color image to a greyscale image where each output pixel value is a scalar representing the luminance of the corresponding input pixel
>> grey = imono(flowers);
The inverse operation is
>> color = icolor(grey);
% the histogram of the street scene is computed and displayed by
>> ihist(street)

>> [n, v] = ihist(street);
% where the elements of n are the number of times pixels occur with the value of the corresponding element of v

>> ihist(street, 'cdf');

% The Toolbox function peak will automatically find the peaks
>> peak(n, v)'
ans =
   (list of the histogram peak locations)

We can identify the pixels in this peak by a logical monadic operation
>> shadows = (street >= 30) & (street <= 80);
>> idisp(shadows)

>> im = istretch(street);
% ensures that pixel values span the full range, which is either [0, 1] or [0, 255]
% A more sophisticated version is histogram normalization or histogram equalization
>> im = inormhist(street);
% Such images can be gamma decoded by a non-linear monadic operation
>> im = igamma(street, 1/0.45);
which raises each pixel to the specified power, or
>> im = igamma(street, 'sRGB');
>> idisp(street/64)

Experiment 9
AIM: To study object tracking using a Kalman filter in MATLAB.
Theory:
The Kalman filter keeps track of the estimated state of the system and the variance or uncertainty of the estimate. The estimate is updated using a state transition model and measurements. The prediction x(k|k-1) denotes the estimate of the system's state at time step k before the k-th measurement zk has been taken into account; P(k|k-1) is the corresponding uncertainty.

The Kalman filter, also known as linear quadratic estimation (LQE), is an algorithm that uses a series of measurements observed over time, containing noise (random variations) and other inaccuracies, and produces estimates of unknown variables that tend to be more precise than those based on a single measurement alone. More formally, the Kalman filter operates recursively on streams of noisy input data to produce a statistically optimal estimate of the underlying system state.

The algorithm works in a two-step process. In the prediction step, the Kalman filter produces estimates of the current state variables, along with their uncertainties. Once the outcome of the next measurement (necessarily corrupted with some amount of error, including random noise) is observed, these estimates are updated using a weighted average, with more weight being given to estimates with higher certainty. Because of the algorithm's recursive nature, it can run in real time using only the present input measurements and the previously calculated state and its uncertainty matrix; no additional past information is required.

Kalman filter algorithm

State model prediction equation:
    XPred = A*X + B.*a
Prediction of the state model covariance:
    statePredCov = A*stateCov*A' + prcNoiseVar
Prediction of the observation model covariance:
    obsCov = C*statePredCov*C' + obsNoiseVar
Determine the Kalman gain:
    kalGain = statePredCov * C' * (obsCov)^(-1)
Observation vector prediction:
    zPred = C*XPred
    zError = z - zPred
State estimation equation:
    XEst = XPred + kalGain * zError
Estimated state covariance:
    stateCov = (eye(4) - kalGain * C) * statePredCov
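To make the recursion concrete before the full tracker below, here is a minimal 1D sketch of the same equations (constant-position model; the noise values are example numbers):

% Minimal 1D Kalman filter: estimate a constant value from noisy readings
A = 1; C = 1;             % scalar state and observation models
Q = 1e-4; R = 0.1^2;      % example process and measurement noise variances
x = 0; P = 1;             % initial estimate and covariance
z = 5 + 0.1*randn(50,1);  % noisy measurements of the true value 5
for k = 1:numel(z)
    xPred = A*x;                        % prediction step
    PPred = A*P*A' + Q;
    K = PPred*C' / (C*PPred*C' + R);    % Kalman gain
    x = xPred + K*(z(k) - C*xPred);     % update with the innovation
    P = (1 - K*C)*PPred;
end
x   % converges towards 5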

CODE:
% object tracking using Kalman filter
close all
% clear workspace
clear all
clc
% read a video
vidObject = VideoReader('car-2.avi');
% determine the number of frames
nFrames = vidObject.NumberOfFrames;
% determine the length and width of a frame
len = vidObject.Height;
wid = vidObject.Width;
sumFrame = zeros(len, wid);

% Kalman filter variables
dt = 0.1;   % time between 2 frames
% initialize the state model matrix
A = [1 0 dt 0; 0 1 0 dt; 0 0 1 0; 0 0 0 1];
B = [dt^2/2; dt^2/2; dt; dt];
% initialize acceleration
a = 0.00;
% initialize the observation model matrix
C = [1 0 0 0; 0 1 0 0];
% noise in the state model
prcNoise = [10.0; 10.0; 1.0; 1.0];
% variance of the state model noise; the noise is Gaussian in nature and has zero mean
prcNoiseVar = [(dt^4/4)*(prcNoise(1)^2) 0 (dt^3/2)*...
    (prcNoise(1)*prcNoise(3)) 0; 0 (dt^4/4)*(prcNoise(2)^2) 0 ...
    (dt^3/2)*(prcNoise(2)*prcNoise(4)); (dt^3/2)*(prcNoise(1)*prcNoise(3))...
    0 (dt^2)*(prcNoise(3)^2) 0; 0 (dt^3/2)*(prcNoise(2)*prcNoise(4))...
    0 (dt^2)*(prcNoise(4)^2)];
% initialize the state model covariance equal to the state noise variance for the first
% iteration only
stateCov = prcNoiseVar;
statePredCov = zeros(4,4);   % state prediction covariance
obsNoise = [0.01; 0.01];     % noise in the observation model
% variance of the observation model noise; the noise is Gaussian in nature and has zero mean
obsNoiseVar = [obsNoise(1)^2 0; 0 obsNoise(2)^2];
obsCov = [0 0; 0 0];   % initialize the observation model covariance
% the state vector consists of the position and velocity of the object
X = [0; 0; 0; 0];
XPred = [0; 0; 0; 0];   % predicted state vector
XEst = [0; 0; 0; 0];    % estimated state vector
z = [0; 0];      % observation vector, i.e. sensor data (position of the object)
zPred = [0; 0];  % predicted observation vector
% cnd is zero until the object is detected for the first time
cnd = 0;
bboxPred = zeros(1,4);
bboxEst = zeros(1,4);
box = zeros(3,4);
absent = 0;

for k = 1 : nFrames
    % read frames from the video
    frame = read(vidObject, k);
    grayFrame = rgb2gray(frame);
    sumFrame = sumFrame + double(grayFrame);
    % background obtained by averaging the sum of the frames
    background = (sumFrame ./ k);
    object = false(len, wid);
    % binary image consisting of the detected object, obtained by background subtraction
    object(abs(double(grayFrame) - background) > 10) = 1;
    % morphological operations to remove clutter from the binary image
    object = imopen(object, strel('square', 3));
    object = imclose(object, strel('square', 12));
    object = imfill(object, 'holes');

    % determine the detected object size and location
    blobAnalyser = vision.BlobAnalysis;
    blobAnalyser.MinimumBlobArea = 100;
    [area, centroid, bbox] = step(blobAnalyser, object);

    if(centroid ~= 0)
        cnd = cnd + 1;
    end
    % Kalman algorithm
    if(cnd == 1)
        % for the first object detection, initialize the state vector
        % with the sensor observation data
        X(1) = centroid(1);
        X(2) = centroid(2);
    elseif(cnd > 1)
        if(centroid ~= 0)
            if(absent == 1)
                X(1:2) = centroid(1:2);
            end
        end
        % state model prediction equation
        XPred = A*X + B.*a;
        % prediction of the state model covariance
        statePredCov = A*stateCov*A' + prcNoiseVar;
        % prediction of the observation model covariance
        obsCov = C*statePredCov*C' + obsNoiseVar;
        % determine the Kalman gain
        kalGain = statePredCov * C' * (obsCov)^(-1);
        % observation vector prediction
        zPred = C*XPred;
        if(centroid ~= 0)
            z(1:2) = centroid(1:2);
            zError = z - zPred;
            absent = 0;   % when the object is present in the frame
        else
            absent = 1;   % when the object is absent from the frame
        end
        % state estimation equation
        XEst = XPred + kalGain * zError;
        % estimated state covariance
        stateCov = (eye(4) - kalGain * C) * statePredCov;
        X = XEst;
        if(bbox ~= 0)
            bboxPred(1:2) = int32(XPred(1:2)) - (bbox(3:4)')./2;
            bboxEst(1:2) = int32(XEst(1:2)) - (bbox(3:4)')./2;
            box(1,1:4) = bbox(1:4);
            box(2,3:4) = bbox(3:4);
            box(3,3:4) = bbox(3:4);
        else
            box(1,1:2) = z(1:2);
            bboxPred(1:2) = XPred(1:2);
            bboxEst(1:2) = XEst(1:2);
        end
        box(2,1:2) = bboxPred(1:2);
        box(3,1:2) = bboxEst(1:2);
    end
    subplot(3,1,1);
    % blue for the detected position of the object, red for the predicted position
    % of the object, green for the estimated position of the object
    frame = insertShape(frame, 'rectangle', box, 'Color', {'blue','red','green'});
    imshow(frame);
    title('output: blue, red, green bounding boxes for observation, prediction, estimated state');
    subplot(3,1,2);
    imshow(uint8(background));
    title('background');
    subplot(3,1,3);
    imshow(object);
    title('binary image of detected objects');
    pause(0.001);
end
close all

OUTPUT:

Experiment 10
AIM: To study Object Tracking using Optical Flow.
THEORY:
In optical flow, motion is estimated as instantaneous image velocities or discrete image displacements. The method calculates the motion between two video frames taken at two time instants t and t + dt at every pixel position. It is based on a local Taylor series approximation of the image signal, i.e. it uses partial derivatives with respect to the spatial and temporal changes.

For a 2D-plus-time video sequence, a pixel at location (x, y, t) with intensity I(x, y, t) moves by dx, dy and dt between the two frames. We impose the constraint that the brightness remains unaltered at the displaced position, i.e.

    I(x, y, t) = I(x + dx, y + dy, t + dt)

Assuming the movement to be small and using a Taylor series we obtain

    I(x + dx, y + dy, t + dt) = I(x, y, t) + (dI/dx)dx + (dI/dy)dy + (dI/dt)dt + higher order terms

Thus from the above equations

    (dI/dx)dx + (dI/dy)dy + (dI/dt)dt = 0

Dividing by dt, this results in

    (dI/dx)Vx + (dI/dy)Vy + (dI/dt) = 0

where Vx and Vy are the x and y components of the velocity, or optical flow, of I(x, y, t), and dI/dx, dI/dy, dI/dt are the derivatives of the video sequence at (x, y, t) in the corresponding directions. Writing the derivatives of intensity as Ix, Iy and It, this can also be expressed as

    Ix*Vx + Iy*Vy = -It

This equation (one equation in two unknowns) is solved, with an additional constraint such as a local smoothness assumption, to determine the optical flow vectors in the two directions x and y.
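A minimal MATLAB sketch of the same computation using the Computer Vision Toolbox Lucas-Kanade estimator (the video name matches the Simulink model described next; the noise threshold is an example value):

% Estimate and display dense optical flow between consecutive frames
vid = VideoReader('viptraffic.avi');
flowModel = opticalFlowLK('NoiseThreshold', 0.009);   % Lucas-Kanade estimator
while hasFrame(vid)
    frame = rgb2gray(readFrame(vid));
    flow = estimateFlow(flowModel, frame);   % per-pixel Vx, Vy, magnitude, orientation
    imshow(frame); hold on;
    plot(flow, 'DecimationFactor', [5 5], 'ScaleFactor', 10);   % draw velocity vectors
    hold off; drawnow;
end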
Fig. 1 shows the Simulink model to track a moving object in a given video. The video used in this model is viptraffic.avi, consisting of moving cars in a road scenario.
1. The initial block is the source block, which is used to read a multimedia file.
2. The second block is used to convert the RGB values to grayscale intensity values.
3. The Optical Flow block is used to obtain the velocity vectors in the x and y directions in complex-conjugate form.
4. The fourth block applies thresholding to obtain the segmented objects in the video and to obtain the bounding box.
5. The display block is used to display the original video, the optical flow vectors, the segmented objects and the bounding box on the original video.
As in Fig. 2, the Thresholding and Region Filtering sub-block initially obtains the magnitude of the velocity vector; a threshold is then computed by averaging all the velocity vectors of the frame, and using this threshold a binary image is obtained. The image is then passed to a median filter to remove the noise in the image. The Blob Analysis block is used on this thresholded image to obtain the bounding box of the object, as given in Fig. 3; a MATLAB sketch of this step follows below.
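In MATLAB terms, the thresholding step of that sub-block amounts to the following sketch (variable names are ours; flow is the estimate from the previous sketch):

% Threshold the flow magnitude against its frame average, then denoise
mag = sqrt(flow.Vx.^2 + flow.Vy.^2);   % velocity magnitude per pixel
th = mean(mag(:));                     % threshold = average over the frame
bw = mag > th;                         % binary image of moving regions
bw = medfilt2(bw);                     % median filter removes speckle noise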
As in Fig. 4, the display sub-block has 4 inputs: the input video, the thresholded video, the optical flow vectors and the tracked output result.
The input video and the thresholded image are displayed directly. The optical flow vectors are passed through the optical flow lines function block, which gives the geometry of the optical flow vectors; these are then plotted on the original video and displayed. The bounding boxes are counted to obtain the number of cars, and the bounding box is displayed on the original video to obtain the result.
SIMULINK MODEL:

Fig.1 Main Simulink Model


Fig. 2 Thresholding and Region Filtering Sub-block

Fig. 3 Region Filtering Sub-block

Fig. 4 Display Result Sub-block

OUTPUT:

7
2

(a)

(b)

(c)

(d)

(e)

Fig. 5 (a) Nth frame of the video, (b) (N+1)th frame of the video, (c) optical flow velocity vectors for frames (a) and (b), (d) thresholding for object detection, and (e) object detection.

