Vivek A. Sujan
JPL Objective
2
JPL Motivation
A new generation of worker robots is required for:
• Exploration and development of space
• Mining and other underground operations
• Environment restoration
• Assisting and supporting humans
• Military applications
- handling hazardous waste
- moving large weapons
3
JPL Challenge
Currently, robots work as individuals:
- fixed bases, factory settings
- unlimited sensory data
- simple tasks
[Photo: Cliffbot]
Technical Problems
• system physical interactions
• complex terrain & unstructured environment
• limited sensing (due to occlusions, etc.)
4
JPL In an Ideal World
Cooperative field task:
• multiple cooperating robots
• distributed sensor suite
• distributed computation
• complete dynamic model
[Block diagram: target location and desired motions x_d1, x_d2 pass through Jacobians J1^T and J2^-1 into a motion/force controller; forces/poses F_r1/x_r1, F_r2/x_r2 and desired force F_d2 drive Robot(2) through its forward kinematics, with system behavior fed back via J2^T. © X. Lin, FSRL]
6
JPL Control Architecture
[Diagram: sensors observe the physical model of robot(s), task and environment; both the sensing and the resulting knowledge are incomplete.]
Problem:
• Incomplete/insufficient knowledge of physical system model
• Insufficient and limited sensory input
• unstructured environment
• uncertainties in task
• sensor occlusion
7
JPL Solution Approach
Model-based Information Theoretic Sensing And Fusion ExploreR (MIT-SAFER)
• Sensor fusion engine → Physical models
• Shared information from robot team members → Plans new sensor poses
[Architecture diagram: sensors 1..N supply incomplete information to a physics-based sensor fusion engine, which builds the physical model of robot(s), task and environment; direct sensor data also feeds control; the model acts as a surrogate source of sensory information; multi-robot multi-sensor input with placement optimization supplies the system state to the physical system and its control and planning algorithms.]
8
JPL Problem Domain
Description:
• multiple heterogeneous robots
• cooperative cliff surface exploration
• inter-system communication/coordination
Current Task
• Environment modeling by RECON-bot
(REmote Cliff Observer and Navigator)
[Photo: cliff-exploration team with RECON-bot, two Anchorbots, and Cliffbot]
9
JPL Background
• Falls in the category of Simultaneous Localization And Mapping (SLAM)
• Information gathering
• Algorithms for structured environments, simple obstacles
(Asada, Burchka, Kruse, Thrun, Kuipers, Yamauchi, Castellanos, Leonard, Choset, etc.)
• Environment assumed to be planar (easily traversable)
• Sensor movement is sequential or follow topological graphs
• Localization
• Algorithms based on monitoring landmarks and relative motion
(Choset, Kuipers, Tomatis, Victorino, Anousaki, Park, Thrun, etc.)
• Assume landmarks are given (human intervention) or select edges
of the structured environment
10
RECON Environment
JPL Modeling
[Flowchart: Start … on the N(o) branch, select a new vision system configuration for the RECON robot … Stop]
11
JPL Step 1: Initialization
• Assume
• RECON-bot motion region as cliff edge plane
• motion region is free/traversable
• Localize system(s)
• External: with respect to a target
• Internal: with respect to one robot
• Initialize environment model/map
• 2.5D elevation grid with associated
measurement uncertainty
• model is considered unknown
• first scan done
[Diagram: X, Y, Z coordinate frame]
12
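The 2.5-D elevation grid with per-cell measurement uncertainty from Step 1 can be sketched as follows. The class and method names are illustrative, and the inverse-variance fusion rule is one standard way to maintain per-cell uncertainty; the slides do not specify the actual update used.

```python
import numpy as np

# Minimal sketch of a 2.5-D elevation grid: each cell stores a height
# estimate, its measurement variance, and a "known" flag. Illustrative only.
class ElevationGrid:
    def __init__(self, nx, ny, cell_size):
        self.cell_size = cell_size
        self.height = np.zeros((nx, ny))            # elevation estimate per cell
        self.variance = np.full((nx, ny), np.inf)   # measurement uncertainty
        self.known = np.zeros((nx, ny), dtype=bool)

    def update(self, ix, iy, z, var):
        """Fuse a new range measurement into cell (ix, iy) using
        inverse-variance (Kalman-style) weighting."""
        if not self.known[ix, iy]:
            self.height[ix, iy] = z
            self.variance[ix, iy] = var
            self.known[ix, iy] = True
        else:
            v0, v1 = self.variance[ix, iy], var
            w = v1 / (v0 + v1)                      # weight on the old estimate
            self.height[ix, iy] = w * self.height[ix, iy] + (1 - w) * z
            self.variance[ix, iy] = v0 * v1 / (v0 + v1)

grid = ElevationGrid(50, 50, cell_size=0.1)
grid.update(10, 10, z=1.0, var=0.04)
grid.update(10, 10, z=1.2, var=0.04)   # a second scan refines the cell
```

Unscanned cells stay flagged unknown with infinite variance, which is what makes the model "considered unknown" until scans arrive.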
Step 2: Cliff
JPL Parameterization
[Flowchart: Start … Stop]
13
Step 2: Cliff
JPL Parameterization
[Three panels, x and y axes in cm (0-500): (1) single simulated Mars cliff surface; (2) thresholded primary cliff boundary; (3) primary cliff boundary closed surface loop]
14
JPL Step 3: Model Building
For each vision system, a new goal pose must be:
• collision free
• reachable by a collision-free path
• not far from the current position (d)
• likely to yield substantial new information (NI)
Rating Function 1:

RF1 = NI - K · d(c, c') · (1 - P_x,y,z)^n

• updated after every “read”
• large search space
• requires numerical optimization
15
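Rating Function 1 can be sketched as follows. K, n and the candidate values are illustrative placeholders, and a real system would search the pose space numerically rather than enumerate a short list.

```python
def rating(ni, dist, p_xyz, K=1.0, n=1):
    """RF1 = NI - K * d(c, c') * (1 - P_xyz)^n  (reconstructed form).

    ni    : expected new information at the candidate pose
    dist  : distance d(c, c') from the current configuration
    p_xyz : probability the pose/path is collision free
    K, n  : tuning constants; values here are illustrative.
    """
    return ni - K * dist * (1.0 - p_xyz) ** n

# Brute-force search over a few hypothetical candidate poses; the real
# system uses numerical optimization over a large search space.
candidates = [
    dict(ni=5.0, dist=2.0, p_xyz=0.9),    # close, safe, informative
    dict(ni=6.0, dist=10.0, p_xyz=0.2),   # informative but far and risky
    dict(ni=4.0, dist=0.5, p_xyz=0.99),   # very close, slightly less NI
]
best = max(candidates, key=lambda c: rating(**c))
```

The penalty term shows the intended trade-off: a distant pose is only worth choosing if it is also very likely to be reachable.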
Step 3 (cont.): New
JPL Information
• need to quantify the expected new information from a
given camera pose
[Diagram: 3-D vision system with field of view and depth of field; an obstacle occludes part of the viewed region, leaving an unknown region]
• information may be interpreted as the minimum number of
bits needed to encode the state of the system
• Properties of H include:
• continuity, symmetry, additivity
• max{H(q1, q2,…, qn)} ⇒ qk = 1/n for k = 1…n
• A simple example
• Fair coin toss: qT = 0.5, qH = 0.5
H = - (0.5 log2 0.5 + 0.5 log2 0.5) = 1 bit
18
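The coin-toss example above can be checked directly; this is plain Shannon entropy, with illustrative function names.

```python
import math

def entropy_bits(probs):
    """Shannon entropy H = -sum(q log2 q), skipping zero-probability outcomes."""
    return -sum(q * math.log2(q) for q in probs if q > 0)

# Fair coin: qT = qH = 0.5 gives exactly 1 bit, matching the slide.
h_fair = entropy_bits([0.5, 0.5])

# A biased coin carries less information; a uniform distribution over
# n outcomes maximizes H, as the qk = 1/n property states.
h_biased = entropy_bits([0.9, 0.1])
h_uniform4 = entropy_bits([0.25] * 4)
```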
Step 3 (cont.): New
JPL Information
• This is extended to 2.5-D:

H_cam(x, y, z, θp, θy) = - Σ_i (n_grid^i / n_grid^max) · [ (P_Vi/2) log2(P_Vi/2) + (1 - P_Vi/2) log2(1 - P_Vi/2) ]

• where
n_grid^i is the number of environment points measured and mapped to cell i
n_grid^max is the maximum allowable mappings to cell i
P_Vi is the probability of visibility of cell i from the camera test pose
[Diagram: camera at Cam_x,y,z casting ray_x,y,z with angular resolution α and tolerance ε onto elevation cells of size Δx × Δy × Δz; surface plot of elevation (-1 to 0.8) over a 10 × 10 X-Y grid]
20
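A sketch of evaluating the reconstructed H_cam sum for one candidate pose. The cell list and parameter names are assumptions, not the original implementation; each cell contributes a binary-entropy term in P_Vi/2, weighted by how densely it has been mapped.

```python
import math

def expected_new_information(cells, n_grid_max):
    """Evaluate the reconstructed H_cam sum for one camera test pose.

    cells: list of (n_grid_i, P_Vi) pairs - points already mapped to
    cell i, and the probability that cell i is visible from the pose.
    """
    def h2(p):  # binary entropy in bits
        if p <= 0.0 or p >= 1.0:
            return 0.0
        return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

    return sum((n_i / n_grid_max) * h2(p_vi / 2.0) for n_i, p_vi in cells)

# A fully visible, fully mapped cell contributes the maximum (1 bit);
# an occluded cell (P_Vi = 0) contributes nothing.
h_visible = expected_new_information([(10, 1.0)], n_grid_max=10)
h_occluded = expected_new_information([(10, 0.0)], n_grid_max=10)
```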
Step 3 (cont.): New
JPL Information
• Probability of visibility of a grid cell i is modified
21
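One plausible building block behind P_Vi is a line-of-sight test against the elevation map. This binary sketch ignores the probabilistic modification the slide refers to (which would weigh the per-cell height uncertainty), and all names are illustrative.

```python
import numpy as np

def cell_visible(z, cam, cell, steps=50):
    """March a ray from the camera toward a grid cell over the 2.5-D
    elevation map z; the cell is occluded if terrain rises above the
    sight line. cam/cell are (x, y, height) in grid coordinates."""
    cx, cy, cz = cam
    tx, ty, tz = cell
    for s in np.linspace(0.0, 1.0, steps, endpoint=False)[1:]:
        x, y = cx + s * (tx - cx), cy + s * (ty - cy)
        ray_h = cz + s * (tz - cz)                # height of the sight line
        if z[int(round(x)), int(round(y))] > ray_h:
            return False                          # terrain blocks the ray
    return True

z = np.zeros((10, 10))
z[5, 5] = 3.0                                     # a tall obstacle mid-grid
cam = (0.0, 0.0, 1.0)
visible = cell_visible(z, cam, (9.0, 9.0, 0.0))   # ray passes the obstacle: blocked
clear = cell_visible(z, cam, (9.0, 0.0, 0.0))     # path avoids the obstacle
```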
Step 3: Motion
JPL Identification
• choose visual markers by evaluating potential markers:
- certainty of occupancy
- 2D and 3D contrast (Forstner interest operator)
- SLOW with a homography transform (⇒ Y. Cheng)

F.E.F. = f(P(x)) + g(C(u, v)) + h(H(x))

• relate visual markers and their true locations: u = g01 · r

[ k1·u1  k2·u2  …  kn·un ]          [ r1x  r2x  …  rnx ]
[ k1·v1  k2·v2  …  kn·vn ]  = g01 · [ r1y  r2y  …  rny ]
[ k1·f   k2·f   …  kn·f  ]          [ r1z  r2z  …  rnz ]
[ 1      1      …  1     ]          [ 1    1    …  1   ]

• solve camera motion (least squares):

g01 = u · r^T · (r · r^T)^-1

• keep track of uncertainty:
- camera motion uncertainty
- measured point uncertainty
- EKF
[Diagram: spatial point r observed as (u, v) in the camera frame; transform g01 relates the camera base frame to the target frame]
22
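The least-squares motion step can be sketched with synthetic data: recover g01 from noiseless correspondences via the Moore-Penrose pseudoinverse. The matrix form on the original slide is hard to read, so this uses the standard least-squares solution of u = g01 · r; the data are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth rigid transform (rotation about z plus translation),
# in homogeneous 4 x 4 form.
theta = 0.3
g_true = np.array([
    [np.cos(theta), -np.sin(theta), 0.0,  0.5],
    [np.sin(theta),  np.cos(theta), 0.0, -0.2],
    [0.0,            0.0,           1.0,  1.0],
    [0.0,            0.0,           0.0,  1.0],
])

n = 12
# Homogeneous spatial points r (4 x n) and their observations u = g_true @ r.
r = np.vstack([rng.uniform(-1, 1, size=(3, n)), np.ones((1, n))])
u = g_true @ r

# Least-squares estimate: g01 = u @ pinv(r), exact for noiseless data
# with at least 4 points in general position.
g_est = u @ np.linalg.pinv(r)
```

With noisy image measurements the recovery is only approximate, which is why the slide tracks camera-motion uncertainty through an EKF.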
JPL Step 4: Data Reduction
[Flowchart: Start → quadtree decomposition (information theory) → base transmission data (BTD) set formed → Stop]
[Panels: simulated world map (top view); original elevation map; transmitted world map; plots of conventional compression ratio and average data, with World X/Y dimensions in cm]
26
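The quadtree decomposition idea can be sketched as follows. The tolerance test (keep a quadrant as one leaf when its height spread is small) is an assumption standing in for the slide's information-theoretic criterion; the map is made up.

```python
import numpy as np

def quadtree(z, x0, y0, size, tol, leaves):
    """Recursively split the elevation map into quadrants; a quadrant
    whose height variation is within tol becomes a single leaf. The
    leaves replace the raw grid as the transmitted data set."""
    block = z[x0:x0 + size, y0:y0 + size]
    if size == 1 or block.max() - block.min() <= tol:
        leaves.append((x0, y0, size, float(block.mean())))
        return
    half = size // 2
    for dx in (0, half):
        for dy in (0, half):
            quadtree(z, x0 + dx, y0 + dy, half, tol, leaves)

# Flat world with one raised quadrant: the flat regions collapse into
# large leaves, so 64 cells compress to 4 leaves.
z = np.zeros((8, 8))
z[4:8, 4:8] = 1.0
leaves = []
quadtree(z, 0, 0, 8, tol=0.1, leaves=leaves)
```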
Results - Laboratory
JPL Setup
[Plots: number of mapped grid cells vs. scan step for three strategies: max. info.; max. info. + cliff edge param.; max. info. + cliff edge param. + interest function. Top views of the mapped region after 20 steps and after 10 steps]
28
JPL Results - Field Setup
[Plot: number of mapped grid cells vs. number of stereo imaging steps (1-10), comparing max. info. against max. info. + interest function]
29
JPL Results - Field Setup
[Plots: expected vs. obtained number of new grid cells per step (1-10), for max. info. and for max. info. + edge param. + interest func.; top views of the mapped region after 10 steps]
30
JPL
31
JPL Task/Target Modeling
Problem: How to position the independently mobile camera?
- mobile vehicles with suspensions
- inter-system communication
- independently mobile camera (occluded)
Solution:
- Online task-directed optimal camera placement using the optimal information gathering method
- Accounts for object motions and other robots in the environment model
[Plot: evaluation of the camera pose rating function over a 100 × 90 world (World X, World Y); primary target, secondary target, moving obstacle, convex hulls of occlusions, and the optimal camera position marked]
34
JPL Results - Simulation
• Task: cooperative guidance of object to target
• 300 tests per scenario; entries are % task success

Task difficulty          Camera strategy         Occl. 1 (5%)       Occl. 2 (20%)      Occl. 3 (35%)
                                                 w/o sec.  w/ sec.  w/o sec.  w/ sec.  w/o sec.  w/ sec.
                                                 target    target   target    target   target    target
easy (20% tolerance)     Optimal re-placement     100       100      76        95       13        25
                         Random re-placement       51        63      18        31        5        10
                         Random placement          45        58      16        28        5         9
medium (10% tolerance)   Optimal re-placement      99       100      63        86       10        18
                         Random re-placement       30        37      11        18        3         6
                         Random placement          23        30       8        15        3         4
hard (1% tolerance)      Optimal re-placement      97        99      30        52        3         7
                         Random re-placement        3         4       1         2      <<1         1
                         Random placement           1         1     <<1       <<1      <<1       <<1

• For low occlusion density, optimal camera placement results in high task success
• For high occlusion density it may not be worth doing the task!
35
JPL Results - Experiment
Task: Cooperative insertion of
a component module into a
mating slot
• Simulations: Target was a
single point
• Experiments: Target is a
pose displacement
[Diagram: vision robot frame R_V viewing the true target; target frame O_T, end-effector frame R_T, and key fiducials marked]
36
Relating Master-Slave
JPL Robots
• R_V is the vision robot base frame
• R_S is the worker robot base frame
• R_T is the worker robot end-effector frame
• O_T is the target frame
[Diagram: fiducial points P_O1, P_O2, P_O3 on the target and P_R1, P_R2, P_R3 on the worker robot end-effector; transforms A_VT and A_TS relate the frames]
• Euler angles of rotation in R_V:

R_z = tan^-1[ (P1_y - P2_y) / (P1_x - P2_x) ]

R_y = tan^-1[ (P1_z - P2_z) / ( sqrt(P1_x^2 + P1_y^2) - sqrt(P2_x^2 + P2_y^2) ) ]

R_x = tan^-1[ (P3_y - P1_y) / (P1_z - P3_z) ]

• Required motion for worker robot:

Required translation = P_O1 - P_R1

Required rotation = R_O - R_RT = [ R_Ox - R_RTx,  R_Oy - R_RTy,  R_Oz - R_RTz ]^T

• Transformation matrices:

A_VT = [ R_x · R_y · R_z   P_R1 ]        A_TS = [ R_TS   P_TS ]
       [ 0                 1    ]               [ 0      1    ]
37
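The reconstructed Euler-angle equations can be sketched and sanity-checked as follows; atan2 stands in for tan^-1 to get quadrant-safe angles, and the fiducial coordinates are made up.

```python
import math

def euler_from_fiducials(p1, p2, p3):
    """Recover Euler angles from three fiducial points, following the
    reconstructed slide equations. Points are (x, y, z) tuples in the
    vision robot frame R_V; illustrative only."""
    rz = math.atan2(p1[1] - p2[1], p1[0] - p2[0])
    ry = math.atan2(p1[2] - p2[2],
                    math.hypot(p1[0], p1[1]) - math.hypot(p2[0], p2[1]))
    rx = math.atan2(p3[1] - p1[1], p1[2] - p3[2])
    return rx, ry, rz

# Fiducials in a horizontal plane with the p2 -> p1 axis along +y:
# only R_z (the in-plane heading) is nonzero, and it comes out as 90 deg.
rx, ry, rz = euler_from_fiducials((0.0, 1.0, 0.0),
                                  (0.0, 0.0, 0.0),
                                  (0.0, 1.0, -1.0))
```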
JPL Results - Experimental
Simplifications
Experimental system overview
• Insertion part kept planar
• Optimal re-positioning within
robot kinematic limitations
• Clear circular markers used to
identify object and mating site
Note
• Environment model created as before
[Photo: experimental system overview with circular markers]
38
JPL Results - Experimental
Non-optimal placement - Task Failure
Visual guidance until contact
• coupled arm/base motion
• base motion for gross positioning
39
Cooperative Insertion -
JPL Failure
[Plots, random camera placement, 3% tolerance: displacement error (inches) and angular error vs. time (0-12 s) diverge from the desired value up to contact; contact forces F_y and F_z (lbs) vs. time (0-8 s) deviate from desired]
40
Cooperative Insertion -
JPL Success
[Plots, optimal camera placement, 3% tolerance: displacement error (inches) and angular error vs. time (0-25 s) converge to the desired value at contact; contact forces F_y and F_z (lbs) vs. time (0-9 s) stay near desired]
41
JPL Summary
42
JPL Final Thoughts
Questions?
Acknowledgements
• MIT/Dubowsky
• The sponsor: JPL
• The Planetary Robotics Laboratory Team
43