
JPL

A Multi-Agent Visual Exploration Architecture of Cliff Surfaces

Vivek A. Sujan
JPL Objective

To develop sensing and estimation algorithms for multiple cooperating mobile robots to work in highly unstructured field environments.

2
JPL Motivation
A new generation of worker robots is required for:
• Exploration and development of space
• Mining and other underground operations
• Environment restoration
• Assisting and supporting humans
• Military applications
  - handling hazardous waste
  - moving large weapons

3
JPL Challenge
Currently, robots work as individuals:
- fixed bases, factory settings
- unlimited sensory data
- simple tasks

Future robots need to work as teams:


- perform complex tasks
- autonomous operation

[Figure: cliff exploration team - two Anchorbots, a RECON-bot, and a Cliffbot on the cliff face]

Technical Problems
• system physical interactions
• complex terrain & unstructured environment
• limited sensing (due to occlusions, etc.)
4
JPL In an Ideal World
Cooperative field task:
• multiple cooperating robots
• distributed sensor suite
• distributed computation
• complete dynamic model

[Diagram: cooperative assembly in an unstructured environment with limited sensing - assumed known: location of target at all times, desired arm motions/torques, vehicle dynamic parameters, dynamic models of the mobile robots and arms, geometric world map, payload properties, object contact state, and disturbance compensation]
5
JPL Ideal World - Task
[Block diagram: dual-robot motion/force control - for each robot i, the desired pose x_di and force F_di feed a motion/force controller through J_i^-1 and J_i^T; forward kinematics closes the loop on the target location and the coupled system behavior, with measured F_ri, x_ri fed back]
© X. Lin, FSRL 6
JPL Control Architecture

[Diagram: sensors with incomplete sensing feed a physical model of the robot(s), task, and environment; this model is itself incomplete knowledge, and the control and planning algorithm drives the physical system. © M. Lichter, FSRL]

Problem:
• Incomplete/insufficient knowledge of physical system model
• Insufficient and limited sensory input
• unstructured environment
• uncertainties in task
• sensor occlusion
7
JPL Solution Approach
Model-based Information Theoretic Sensing And Fusion ExploreR (MIT-SAFER)
• Sensor fusion engine → physical models
• Shared information from robot team members → plans for new sensor poses

[Diagram: MIT-SAFER architecture - incomplete information from sensors 1..N enters a physics-based sensor fusion engine that maintains the physical model of the robot(s), task, and environment; direct sensor data and surrogate sensory information feed the control and planning algorithm, which commands the physical system and drives multi-robot, multi-sensor input with placement optimization]
8
JPL Problem Domain
Description:
• multiple heterogeneous robots
• cooperative cliff surface exploration
• inter-system communication/coordination
Current Task
• Environment modeling by RECON-bot
(REmote Cliff Observer and Navigator)
[Figure: cliff exploration team - two Anchorbots, the RECON-bot, and the Cliffbot]

9
JPL Background
• Falls in the category of Simultaneous Localization And Mapping (SLAM)

• Information gathering
• Algorithms for structured environments, simple obstacles
(Asada, Burchka, Kruse, Thrun, Kuipers, Yamauchi, Castellanos, Leonard, Choset, etc.)
• Environment assumed to be planar (easily traversable)
• Sensor movement is sequential or follow topological graphs

• Localization
• Algorithms based on monitoring landmarks and relative motion
(Choset, Kuipers, Tomatis, Victorino, Anousaki, Park, Thrun, etc.)
• Assume landmarks are given (human intervention) or select edges of the structured environment

• Not efficient in practice and limited to non-cooperative systems

10
JPL RECON Environment Modeling

Main steps:
• Step 1: initialization
• Step 2: cliff edge identification
• Step 3: information based modeling
• Step 4: data transmission

Flowchart: Start → Initialize RECON robot system → Parameterize cliff edge →
End criteria: is expanse and resolution sufficient?
  Y: Transmit data to Cliffbot → Stop
  N: Select new vision system configuration for RECON robot →
     Move system into desired state → Acquire and merge new data →
     (return to cliff edge parameterization)
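A minimal sketch of this loop, with the per-step routines passed in as callables since each step is detailed on the slides that follow (all names here are hypothetical stand-ins, not the deck's implementation):

```python
# Minimal sketch of the RECON environment-modeling loop (Steps 1-4).
# The per-step routines are hypothetical stand-ins, passed in as
# callables so that the loop itself is self-contained.

def model_environment(scan, merge, parameterize_edge, coverage_ok,
                      next_pose, move_to, transmit, max_steps=50):
    world = scan()                          # Step 1: initialization, first scan
    for _ in range(max_steps):
        edge = parameterize_edge(world)     # Step 2: cliff edge identification
        if coverage_ok(world, edge):        # end criteria met -> transmit
            break
        move_to(next_pose(world))           # Step 3: information-based new pose
        world = merge(world, scan())        # acquire and merge new data
    transmit(world)                         # Step 4: data reduction + transmit
```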

11
JPL Step 1: Initialization
• Assume
  • RECON-bot motion region as cliff edge plane
  • motion region is free/traversable

• Localize system(s)
  • External: with respect to a target
  • Internal: with respect to one robot

• Initialize environment model/map
  • 2.5-D elevation grid with associated measurement uncertainty
  • model is considered unknown
  • first scan done
12
JPL Step 2: Cliff Parameterization

Flowchart: Start → Threshold bound Z data →
Region growing to obtain main rover plateau →
Close main plateau: (a) binary dilate, (b) binary erode →
Temporary completion of environment model →
Select cliff edge (plateau boundary) pixels →
Edge following to form single closed loop of boundary pixels →
Best polygon fit → Stop
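Assuming the 2.5-D map is a NumPy elevation grid, the thresholding, region-growing, and morphological steps above map directly onto standard scipy.ndimage operations; a sketch (the threshold, seed cell, and iteration counts are illustrative, not the deck's values):

```python
# Sketch of Step 2: extract cliff-edge (plateau boundary) pixels from
# a 2.5-D elevation grid z using thresholding + morphology.
import numpy as np
from scipy import ndimage

def cliff_edge_pixels(z, z_min, seed):
    """Return a boolean mask of plateau-boundary (cliff edge) pixels."""
    above = z > z_min                                   # threshold bound Z data
    labels, _ = ndimage.label(above)                    # region growing
    plateau = labels == labels[seed]                    # main rover plateau
    plateau = ndimage.binary_dilation(plateau, iterations=2)  # close plateau:
    plateau = ndimage.binary_erosion(plateau, iterations=2)   # dilate, erode
    interior = ndimage.binary_erosion(plateau)
    return plateau & ~interior                          # boundary pixels

# e.g. edge = cliff_edge_pixels(elevation, z_min=-0.1, seed=(240, 250))
```

Edge following into a single closed loop and the best polygon fit would then run on this boundary mask.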

13
JPL Step 2: Cliff Parameterization

[Figure: simulated Mars cliff surface - (a) thresholded surface, (b) primary closed boundary, (c) single closed boundary loop; x and y axes in cm, 0-500]

14
JPL Step 3: Model Building
For each vision system - New goal properties:
• collision free
• reached by collision free path
• not far from current position (d)
• obtain a lot of new information (NI)

Rating Function 1:

$$RF_1 = NI - K \cdot d(c, c') \cdot (1 - P_{x,y,z})^n$$

• updated after every “read”
• large search space
• requires numerical optimization
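A sketch of scoring candidate poses with RF1 as reconstructed above; the names `expected_info` and `p_xyz` stand in for the new-information and probability quantities defined on the following slides:

```python
# Sketch of Rating Function 1. `new_info` is the expected new
# information NI for a candidate camera pose c_cand, and `p_xyz` the
# probability term from the slide (left to context here).
import numpy as np

def rf1(new_info, c_now, c_cand, p_xyz, K=1.0, n=2):
    d = np.linalg.norm(np.asarray(c_cand) - np.asarray(c_now))
    return new_info - K * d * (1.0 - p_xyz) ** n

# Numerical optimization over a sampled search space of candidate poses:
# best = max(poses, key=lambda c: rf1(expected_info(c), c_now, c, p_xyz(c)))
```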

15
JPL Step 3 (cont.): New Information

• need to quantify the expected new information from a given camera pose

[Figure: a 3-D vision system's field of view and depth of field over the terrain; an obstacle splits the scene into a viewed region and an unknown (occluded) region]

• information may be interpreted as the minimum number of states (bits, digits, etc.) needed to describe a piece of data
  - many different definitions/measures proposed (Hartley, Feinstein, Fisher, Shannon)
  - philosophy, cognitive science, communication systems, etc.
16
JPL Step 3 (cont.): New Information

• Shannon proposed the following measure for information (given the probability of occurrence q_k for the kth event)

$$H(q_1, q_2, \ldots, q_n) = -\sum_{k=1}^{n} q_k \log_2 q_k$$

• Properties of H include:
  • continuity, symmetry, additivity
  • max{H(q1, q2, …, qn)} ⇒ qk = 1/n for k = 1…n

• A simple example
  • Fair coin toss: qT = 0.5, qH = 0.5
    H = -(0.5 log2 0.5 + 0.5 log2 0.5) = 1 bit
  • Unfair coin toss: qT = 0.99, qH = 0.01
    H = 0.081 bit
  • Unfair coin toss: qT = 1, qH = 0
    H = 0 bit


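The coin-toss numbers can be checked directly from Shannon's measure; a small sketch:

```python
# The slide's coin-toss examples, computed from H = -sum qk log2 qk.
import math

def entropy(probs):
    """Shannon entropy in bits, with 0*log2(0) taken as 0."""
    return -sum(q * math.log2(q) for q in probs if q > 0.0)

print(entropy([0.5, 0.5]))    # fair coin    -> 1.0 bit
print(entropy([0.99, 0.01]))  # unfair coin  -> ~0.081 bit
print(entropy([1.0, 0.0]))    # certain coin -> 0.0 bit
```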
17
JPL Step 3 (cont.): New Information

• Shannon's emphasis was the information content of 1-D signals
• For 2-D gray level signals (images) define

$$q_k = f_k / N \quad \text{for } k = 1 \ldots N_{gray}$$

where fk is the number of pixels with gray level k
      N is the total number of pixels
      Ngray is the number of gray levels (e.g. 256)

18
JPL Step 3 (cont.): New Information

• This is extended to 2.5-D:

$$H_{cam}(x, y, z, \theta_p, \theta_y) = -\sum_i \frac{n_{grid}^i}{n_{grid}^{max}} \left[ \frac{P_{V_i}}{2} \log_2\!\left(\frac{P_{V_i}}{2}\right) + \left(1 - \frac{P_{V_i}}{2}\right) \log_2\!\left(1 - \frac{P_{V_i}}{2}\right) \right]$$

• where
  n_grid^i is the number of environment points measured and mapped to cell i
  n_grid^max is the maximum allowable mappings to cell i
  P_Vi is the probability of visibility of cell i from the camera test pose

• This measures the sum of the information expected from each grid cell in the FOV

• Also extended to 3-D in previous work, which measured the information expected in a 3-D image formed in the FOV
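A direct transcription of this measure, assuming per-cell arrays of mapping counts and visibility probabilities are available for the grid cells falling in the candidate pose's field of view:

```python
# Sketch of the 2.5-D expected-information measure H_cam for one
# candidate camera pose. n_grid[i] and p_vis[i] are the per-cell
# mapping count and visibility probability for FOV cell i.
import numpy as np

def h_cam(n_grid, n_grid_max, p_vis):
    q = np.clip(np.asarray(p_vis) / 2.0, 1e-12, 1.0 - 1e-12)   # avoid log2(0)
    h_cell = -(q * np.log2(q) + (1.0 - q) * np.log2(1.0 - q))  # per-cell bits
    return float(np.sum((np.asarray(n_grid) / n_grid_max) * h_cell))
```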
19
JPL Step 3 (cont.): New Information

• Probability of visibility of a grid cell i is obtained:

$$P_V^i = \prod_{\Delta x} \left[ \mathrm{sgn}(ray_z - Ob_z) \cdot \frac{1}{\sigma_z \sqrt{2\pi}} \int_0^{|ray_z - Ob_z|} \exp\!\left(-\frac{z^2}{2\sigma_z^2}\right) dz + 0.5 \right]$$

i.e. the product of the probabilities of the ray from cell i passing through the intervening cells

[Figure: camera test location Cam_{x,y,z} casting a ray ray_{x,y,z} over an elevation grid of cell size Δx, Δy; an intervening cell Ob_{x,y,z} with elevation uncertainty σ_{x,y,z} may block the ray to the test point Pt_{x,y,z}]
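Each bracketed factor above simplifies to the Gaussian cumulative distribution evaluated at the ray's clearance over the cell, so P_V reduces to a product of normal CDFs; a sketch:

```python
# Sketch of the visibility probability P_V^i: the product, over the
# grid cells an imaging ray crosses, of the probability that the ray
# clears each cell's (uncertain) elevation. Each factor is the
# Gaussian CDF Phi((ray_z - Ob_z) / sigma_z).
import math

def p_visible(clearances, sigmas):
    """clearances[j] = ray_z - Ob_z at intervening cell j."""
    p = 1.0
    for dz, s in zip(clearances, sigmas):
        p *= 0.5 * (1.0 + math.erf(dz / (s * math.sqrt(2.0))))  # Phi(dz/s)
    return p

# e.g. p_visible(clearances=[0.30, 0.05, -0.10], sigmas=[0.02] * 3)
```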

20
JPL Step 3 (cont.): New Information

• Probability of visibility of a grid cell i is modified by pre-multiplication with an Interest Function (I.F.):

$$I.F._i^0 = 1 \qquad\qquad I.F._i^k = \frac{1}{e^{\beta P_V^i}} \cdot I.F._i^{k-1}$$

given the kth unsuccessful measurement, where β is a scaling constant

• Reflects the data quality in the region (due to sensor limitations)
21
JPL Step 3: Motion Identification

• choose visual markers by evaluating potential markers
  - certainty of occupancy
  - 2-D and 3-D contrast (Forstner interest operator)
  - SLOW with a homography transform (⇒ Y. Cheng)

$$F.E.F. = f\bigl(P(x)\bigr) + g\bigl(C(u,v)\bigr) + h\bigl(H(x)\bigr)$$

• relate visual markers and their true locations: u = g01 r

$$\begin{bmatrix} k_1 u_1 & k_2 u_2 & \cdots & k_n u_n \\ k_1 v_1 & k_2 v_2 & \cdots & k_n v_n \\ k_1 f & k_2 f & \cdots & k_n f \\ 1 & 1 & \cdots & 1 \end{bmatrix} = g_{01} \cdot \begin{bmatrix} r_1^x & r_2^x & \cdots & r_n^x \\ r_1^y & r_2^y & \cdots & r_n^y \\ r_1^z & r_2^z & \cdots & r_n^z \\ 1 & 1 & \cdots & 1 \end{bmatrix}$$

• solve camera motion (least squares):

$$g_{01} = u \, r^T \left( r \, r^T \right)^{-1}$$

• keep track of uncertainty
  - camera motion uncertainty
  - measured point uncertainty
  - EKF

[Figure: a spatial point r imaged at k(u, v, f) in the camera frame; the transform g01 relates the camera frame to the target base frame]
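With the markers column-stacked as above, the least-squares solve is a single call; a sketch (using numpy's solver rather than forming the normal equations explicitly):

```python
# Sketch of the least-squares camera-motion solve u = g01 @ r.
# u: 4xN scaled image coordinates [k*u, k*v, k*f, 1]^T per column;
# r: 4xN homogeneous marker positions [x, y, z, 1]^T per column;
# needs N >= 4 markers.
import numpy as np

def camera_motion(u, r):
    # lstsq solves r.T @ g01.T ~= u.T, i.e. the same normal equations
    # as g01 = u r^T (r r^T)^(-1) on the slide.
    g01_T, *_ = np.linalg.lstsq(r.T, u.T, rcond=None)
    return g01_T.T
```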
22
JPL Step 4: Data Reduction

Flowchart: Start → Convolve with low pass filter → Adaptive decimation →
Lossless data compression → Quadtree decomposition (information theory) →
Base transmission data (BTD) set formed:
  (a) coordinates of quadtree nodes
  (b) value of quadtree node = avg(quad value)
→ Transmit data to Cliffbot → Stop

[Figures: simulated world map (top view), original elevation map, quadtree decomposition, and transmitted world map; plot of average data reduction ratio vs. terrain dH (units = rover clearance height), compared against the conventional compression ratio]
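A sketch of the quadtree step, assuming a square (ideally power-of-two) elevation array: a quad is emitted whole once its elevation spread falls below a tolerance, and its value is the quad average, matching the BTD definition above (the tolerance and minimum size are illustrative):

```python
# Sketch of quadtree decomposition for the BTD set: recurse until a
# quad's elevation spread is small, then emit (x0, y0, size, mean).
import numpy as np

def quadtree(z, x0=0, y0=0, tol=0.05, min_size=2):
    """Yield (x0, y0, size, mean) nodes for a square elevation array z."""
    n = z.shape[0]
    if n <= min_size or z.max() - z.min() <= tol:
        yield (x0, y0, n, float(z.mean()))        # node value = avg(quad value)
        return
    h = n // 2
    yield from quadtree(z[:h, :h], x0,     y0,     tol, min_size)
    yield from quadtree(z[:h, h:], x0 + h, y0,     tol, min_size)
    yield from quadtree(z[h:, :h], x0,     y0 + h, tol, min_size)
    yield from quadtree(z[h:, h:], x0 + h, y0 + h, tol, min_size)

# btd = list(quadtree(elevation))  # then lossless-compress and transmit
```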
23
JPL Step 4: Data Reduction

[Figure: simulated world map (top view) and transmitted world map, world X vs. world Y dimensions]
24
JPL Experimental System Setup

26
JPL Results - Laboratory Setup

[Plot: number of mapped grid cells vs. number of stereo imaging steps for five strategies - Max. Info. + cliff edge param. + interest function; Max. Info. + cliff edge param.; Max. Info.; raster mapping w/ yaw; raster mapping w/o yaw]
27
JPL Results - Laboratory Setup

[Plots: expected vs. obtained number of new mapped cells per stereo imaging step, for Maximum Information alone and for Maximum Information + cliff edge parameterization + interest function; insets show top views of the mapped region after 20 and 10 steps respectively]
28
JPL Results - Field Setup
1600

1400
Max. info.
mapped

+ Interest function
cells

1200

Max. info.
pointsgrid
ofmapped

1000

800
Number of
Number

600

400

200
1 2 3 4 5 6 7 8 9 10

Number
Number ofofStereo
stereo imaging
Imagingsteps
Steps
29
JPL Results - Field Setup

[Plots: expected vs. obtained number of new mapped grid cells per stereo imaging step, for Max. Info. and for Max. Info. + edge param. + interest function; insets show top views of the mapped region after 10 steps]
30
JPL

Food for thought

31
JPL Task/Target Modeling

Problem: How to position the "eyes" in an effective way to carry out a vision guided task?

Solution:
- Online task directed optimal camera placement using the optimal information gathering method
- Accounts for object motions and other robots in the environment model

[Figure: mobile vehicles with suspensions carrying independently mobile cameras (one occluded), coordinating via inter-system communication]

Flowchart: 3D world geometric map + target location + camera properties → rating function definition (w.r.t. target, kinematic constraints) → optimum rating function → optimal camera placement


32
JPL Camera Pose Selection

Task directed optimal placement is dependent on:

• depth of field

• resolution

$$R = \frac{2 d \tan(\alpha/2)}{n} \qquad \text{and} \qquad Res_{RF} = \frac{1}{R}$$

• target angular visibility

$$TAV_{RF} = \begin{cases} \cos\beta & \text{for } \tfrac{\pi}{2} \ge \beta \ge \tfrac{-\pi}{2} \\[4pt] 0 & \text{for } \beta \ge \tfrac{\pi}{2} \text{ or } \beta \le \tfrac{-\pi}{2} \end{cases}$$

• target field visibility

• alternate/secondary targets

Rating Function:

$$RF_2(x, y, z) = K \cdot DOF_{RF}^{\alpha} \cdot Res_{RF}^{\beta} \cdot TFV_{RF}^{\gamma} \cdot TAV_{RF}^{\delta} \cdot (1 - P_{x,y,z})$$

[Figures: target angular visibility and target field visibility geometries]
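A sketch of these terms and their combination, with the exponents acting as task weights; note the slide reuses α and β for the field-of-view angle and view angle, so the code renames the exponents a..e:

```python
# Sketch of the camera-pose rating function RF2 and two of its terms.
# alpha_fov is the camera field-of-view angle, n the pixel count across
# it, d the distance to the target, beta the view angle to the target.
import math

def res_rf(d, alpha_fov, n):
    """Inverse of the per-pixel footprint R = 2 d tan(alpha_fov/2) / n."""
    return n / (2.0 * d * math.tan(alpha_fov / 2.0))

def tav_rf(beta):
    """Target angular visibility: cos(beta) inside +/- pi/2, else 0."""
    return math.cos(beta) if abs(beta) < math.pi / 2.0 else 0.0

def rf2(dof, res, tfv, tav, p_xyz, K=1.0, a=1.0, b=1.0, c=1.0, e=1.0):
    return K * dof**a * res**b * tfv**c * tav**e * (1.0 - p_xyz)
```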
33
JPL Results - Simulation

Single robot
Task: to monitor a target with sufficient accuracy in a simulated planar environment, with camera motion around a moving obstacle

[Figure: simulated world (100 x 100) with primary and secondary targets, a moving obstacle, convex hulls of occlusions, and the optimal camera position found by evaluating the camera pose rating function]

34
JPL Results - Simulation
300 tests per scenario Occlusion Density 1 (5%) Occlusion Density 2 (20%) Occlusion density (35%)

• Task: cooperative
Without With Without With Without With
secondary secondary secondary secondary secondary secondary
target target target target target target
Task
difficulty:
Optimal
camera re- guidance of object
easy→20% placement 100 100 76 95 13 25
tolerance Success (%)
Random
to target
camera re-
placement 51 63 18 31 5 10
Success (%)
Random
• % of task success
camera
placement 45 58 16 28 5 9
Task
Success (%)
Optimal
• For low occlusion
difficulty:
medium→
camera re-
placement 99 100 63 86 10 18 density, optimal
10% Success (%)
tolerance Random
camera re-
30 37 11 18 3 6 camera placement
placement
Success (%)
Random results in high task
camera
placement 23 30 8 15 3 4 success
Success (%)
Task Optimal
difficulty:
hard→1%
camera re-
placement 97 99 30 52 3 7 • For high occlusion
tolerance Success (%)
Random
camera re-
placement 3 4 1 2 <<1 1 density it may not
Success (%)
Random be worth doing the
camera
placement 1 1 <<1 <<1 <<1 <<1 task!
Success (%)
35
JPL Results - Experiment

Task: cooperative insertion of a component module into a mating slot
• Simulations: target was a single point
• Experiments: target is a pose displacement

[Figure: true viewing target with frames O_T, R_T, and R_V, and key fiducials on the hardware]
36
JPL Relating Master-Slave Robots

• R_V is the vision robot base frame
• R_S is the worker (slave) robot base frame
• R_T is the worker robot end-effector frame
• O_T is the target frame

• Euler angles of rotation in R_V (from tracked points P1, P2, P3):

$$R_z = \tan^{-1}\!\left( \frac{P1_y - P2_y}{P1_x - P2_x} \right) \qquad R_y = \tan^{-1}\!\left( \frac{P1_z - P2_z}{\sqrt{P1_x^2 + P1_y^2} - \sqrt{P2_x^2 + P2_y^2}} \right) \qquad R_x = \tan^{-1}\!\left( \frac{P3_y - P1_y}{P1_z - P3_z} \right)$$

• Transformation matrices:

$$A_{VT} = \begin{bmatrix} R_x^{R_T} R_y^{R_T} R_z^{R_T} & P_{R1} \\ 0 & 1 \end{bmatrix} \qquad A_{TS} = \begin{bmatrix} R_{TS} & P_{TS} \\ 0 & 1 \end{bmatrix}$$

• Required motion for the worker robot:

$$\text{Required Translation} = P_{O1} - P_{R1} \qquad \text{Required Rotation} = R_O - R_{R_T} = \begin{bmatrix} R_O^x - R_{R_T}^x \\ R_O^y - R_{R_T}^y \\ R_O^z - R_{R_T}^z \end{bmatrix}$$

[Figure: vision robot frame R_V observing fiducials P_R1..P_R3 on the slave robot and P_O1..P_O3 on the target; A_VT and A_TS relate the frames]
37
JPL Results - Experimental

Simplifications:
• Insertion part kept planar
• Optimal re-positioning within robot kinematic limitations
• Clear circular markers used to identify object and mating site

Note:
• Environment model created as before

[Figure: experimental system overview, with markers indicated]
38
JPL Results - Experimental

Non-optimal placement led to task failure; optimal placement to task success.

Visual guidance till contact:
• coupled arm/base motion
• base motion for gross positioning

Force feedback insertion (motion in direction of least resistance):
• only arm motion

Task success with varying difficulty:

                                     Optimal camera    Random
                                     placement         placement
Easy (insert site tolerance: 18%)    10/10             7/10
Medium (insert site tolerance: 6%)   10/10             4/10
Hard (insert site tolerance: 3%)     9/10              2/10

39
JPL Cooperative Insertion - Failure

Random camera placement, 3% tolerance:

[Plots: displacement error and angular error vs. time, with the contact instant and desired values marked; insertion forces Fy and Fz vs. time after contact]
40
JPL Cooperative Insertion - Success

Optimal camera placement, 3% tolerance:

[Plots: displacement error and angular error vs. time, with the contact instant and desired values marked; insertion forces Fy and Fz vs. time after contact]
41
JPL Summary

• It is difficult or impossible to directly measure key information required for the control of cooperative field robots
• The objective of this research was to develop algorithms to compensate for such sensor limitations
• The approach was to use optimal information gathering methods from distributed resources
• Applied toward the modeling of the robots' environment and task

42
JPL Final Thoughts
Questions?

Acknowledgements
• MIT/Dubowsky
• The sponsor
• The JPL Planetary Robotics Laboratory Team
43
