
Image-Based Models with Applications in Robot Navigation

Dana Cobzas
Supervisor: Hong Zhang


3D Modeling in Computer Graphics


[Figure: acquisition (range sensors, modelers) of a real scene → geometric model + texture → rendering of a new view. After Pollefeys & van Gool]

Graphics model: detailed 3D geometric model of a scene


Goal: rendering new views

Mapping in Mobile Robotics


[Figure: sensors → map building → map; map + robot → localization/tracking in the navigation environment]

Navigation map: representation of the navigation space
Goal: tracking/localizing the robot


Same objective: How to model existing scenes?


Traditional geometry-based approaches:
= geometric model + surface model + light model
- Modeling complex real scenes is slow
- Achieving photorealism is difficult
- Rendering cost is related to scene complexity
+ Easy to combine with traditional graphics

Alternative approach: image-based modeling
= non-geometric model built from images
- Difficult to acquire real scenes
- Difficult to integrate with traditional graphics
+ Achieving photorealism is easier when starting from real photos
+ Rendering cost is independent of scene complexity
In this work we combine the advantages of both for mobile-robot localization and predictive display.

This thesis
Investigates the applicability of IBMR techniques in mobile robotics.

Questions addressed:
Is it possible to use an image-based model (IBM) as a navigation map for mobile robotics?
Do they provide the desired accuracy for the specific applications, localization and tracking?
What advantages do they offer compared to traditional geometry-based models?


Approach
Solution:
Reconstructed geometric model combined with image information
Two models:
Model 1 (calibrated): panorama with depth
Model 2 (uncalibrated): geometric model with dynamic texture
Applications in localization/tracking and predictive display


Model 1: Panoramic model


Model 1: Overview

Standard panorama: no parallax, reprojection only from the original viewpoint
Solution: add depth/disparity information (a reprojection sketch follows the list):
1. Using two panoramic images for stereo
2. Depth from standard planar image stereo
3. Depth from a laser range-finder
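Adding depth is what makes reprojection from new viewpoints (with parallax) possible. Below is a minimal sketch of that idea, assuming a cylindrical panorama indexed by azimuth and normalized height with a per-pixel radial depth map; the function name, parameterization, and the pure-translation assumption are illustrative, not the thesis' actual implementation.

```python
import numpy as np

def reproject_cylindrical(depth, t_new):
    """Reproject a cylindrical panorama with per-pixel depth to a camera
    translated by t_new (hypothetical helper, pure translation assumed)."""
    H, W = depth.shape
    v, u = np.mgrid[0:H, 0:W]
    theta = 2.0 * np.pi * u / W              # azimuth of each column
    h = (v - H / 2.0) / (H / 2.0)            # normalized height of each row

    # Back-project every pixel to 3D using its radial depth.
    X = depth * np.cos(theta)
    Y = depth * np.sin(theta)
    Z = depth * h
    pts = np.stack([X, Y, Z], axis=-1) - t_new   # points in the new frame

    # Forward-project into the new cylindrical view (now with parallax).
    theta_new = np.arctan2(pts[..., 1], pts[..., 0]) % (2.0 * np.pi)
    r_new = np.maximum(np.hypot(pts[..., 0], pts[..., 1]), 1e-9)
    u_new = theta_new / (2.0 * np.pi) * W
    v_new = (pts[..., 2] / r_new) * (H / 2.0) + H / 2.0
    return u_new, v_new                      # pixel positions in the new view
```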


Depth from stereo


[Figure: trinocular vision system (Point Grey Research); cylindrical image-based panoramic model + depth map]


Depth from laser range-finder


[Figure: CCD camera and laser range-finder on a pan unit; 180-degree panoramic mosaic with corresponding range data (spherical representation)]


Data from different sensors requires registration.
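Since the intensity data comes from the camera and the depth from the laser, the two must be brought into a common frame before they can be combined. A minimal sketch, assuming the laser-to-camera rigid transform (R, t) is known from an offline calibration; the function name and the v_scale height factor are illustrative.

```python
import numpy as np

def range_points_to_panorama(points_laser, R, t, pano_w, pano_h, v_scale=200.0):
    """Map laser range points (N x 3, laser frame) into panorama pixels."""
    pts = points_laser @ R.T + t                       # laser frame -> camera frame

    # Cylindrical projection: azimuth -> column, scaled height -> row.
    theta = np.arctan2(pts[:, 1], pts[:, 0]) % (2.0 * np.pi)
    r = np.maximum(np.hypot(pts[:, 0], pts[:, 1]), 1e-9)
    u = theta / (2.0 * np.pi) * pano_w
    v = pano_h / 2.0 - v_scale * pts[:, 2] / r

    depth = np.linalg.norm(pts, axis=1)                # range value attached to (u, v)
    return u, v, depth
```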

Model 1: Applications
Absolute localization:
Input: image + depth
Features: planar patches, vertical lines

Incremental localization:
Input: intensity image
Assumes: approximate pose
Features: vertical lines

Predictive display:


Model 2: Geometric model with dynamic texture


Model 2: Overview
[Figure: input images → tracking → model (geometric model + dynamic texture) → rendering → applications]


Geometric structure
Structure-from-motion algorithm: tracked features → camera poses + 3D structure (a factorization sketch follows)
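For illustration, a Tomasi-Kanade-style affine factorization is one standard way to turn tracked features into poses and structure; this is only a sketch, not necessarily the exact structure-from-motion algorithm used in the thesis.

```python
import numpy as np

def affine_sfm(W):
    """Factor a 2F x P measurement matrix (x/y coordinates of P features
    tracked over F frames) into per-frame affine cameras and 3D structure."""
    W0 = W - W.mean(axis=1, keepdims=True)     # remove per-frame translation

    # Rank-3 factorization: W0 ~ M @ S (up to an affine ambiguity).
    U, s, Vt = np.linalg.svd(W0, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])              # camera (pose) rows, 2F x 3
    S = np.sqrt(s[:3])[:, None] * Vt[:3]       # 3D structure, 3 x P
    return M, S
```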


Dynamic texture
[Figure: input images I1 ... It are re-projected through the recovered geometry to give textures; a variability basis captures how the texture changes across views]
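One way to read the variability basis is as a low-dimensional linear basis for the textures obtained by re-projecting the input images through the recovered geometry; a new texture is then a blend of basis textures. A minimal PCA-style sketch of that idea (names and the exact formulation are illustrative; the thesis' construction may differ).

```python
import numpy as np

def build_texture_basis(textures, k):
    """textures: T x N array, each row a re-projected texture flattened to
    N pixels. Returns the mean texture, k basis textures, and coefficients."""
    mean = textures.mean(axis=0)
    U, s, Vt = np.linalg.svd(textures - mean, full_matrices=False)
    basis = Vt[:k]                              # k variability directions
    coeffs = (textures - mean) @ basis.T        # per-frame blending weights
    return mean, basis, coeffs

def synthesize_texture(mean, basis, y):
    """Blend the basis with coefficients y to get a new (view-dependent) texture."""
    return mean + y @ basis
```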


3D SSD Tracking
Goal: determine camera motion (rotation + translation) from image differences
Assumes: sparse geometric model of the scene
Features: planar patches

[Figure: starting from the initial motion, the current motion composes the past motion with a differential motion; correspondingly, the current warp composes the past warp with a differential warp of the 3D model's patches. A Gauss-Newton update sketch follows.]
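The differential motion is the pose increment that best explains the image differences between the template patches and the current image warped with the past motion. A minimal Gauss-Newton step, assuming the residual vector and its Jacobian with respect to the six pose parameters have already been assembled; a sketch only, the thesis' parameterization and composition rules may differ.

```python
import numpy as np

def ssd_pose_increment(residual, jacobian):
    """residual: N image differences (template - warped current image).
    jacobian: N x 6 derivatives of the residual w.r.t. the pose parameters.
    Returns the 6-vector differential motion to compose with the past motion."""
    JtJ = jacobian.T @ jacobian
    Jtr = jacobian.T @ residual
    return -np.linalg.solve(JtJ, Jtr)           # Gauss-Newton least-squares step
```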


Tracking example


Tracking and predictive display


Goal: track the robot's 3D pose along a trajectory
Input: geometric model (acquired from images) and initial pose
Features: planar patches


Thesis contributions
Contrast calibrated and uncalibrated methods for capturing scene geometry and appearance from images:
panoramic model with depth data (calibrated)
geometric model with dynamic texture (uncalibrated)

Demonstrate the use of the models as navigation maps with applications in mobile robotics:
absolute localization
incremental localization
model-based tracking
predictive display


Thesis questions
Is it possible to use an image-based model as a navigation map for mobile robotics?
A combination of geometric and image-based models can be used as a navigation map.

Do they provide the desired accuracy for the specific applications, localization and tracking?
The geometric model (reconstructed from images) is used by the localization/tracking algorithms. The accuracy of these algorithms depends on the accuracy of the reconstructed model. The model accuracy can also be improved during navigation, as different levels of accuracy are needed depending on the location (large space/narrow space); this is left as future work.

What advantages do they offer compared to traditional geometry-based models?


The image information is used to solve the data association problem. Model renderings are used to predict the robot's location for a remote user.

Comparison with current approaches


Mobile Robotics Map
+ Image information for data association
+ Complete model that can be rendered closer to human perception
- Concurrent localization and matching (SLAM, Durrant-Whyte)
- Invariant features (light, occlusion) (SIFT, Lowe)
- Uncertainty in feature location (localization algorithms)

Graphics Model (dynamic texture model: hybrid image + geometric model)
+ Easy acquisition: non-calibrated camera (raysets, geometric models)
+ Photorealism (geometric models)
+ Traditional rendering using the geometric model (raysets)
- Automatic feature detection for tracking larger scenes
- Denser geometric model (relief texture)
- Light invariance (geometric models, photogrammetry)

Future work
Mobile Robotics Map
Improve the map during navigation
Different map resolutions depending on robot pose
Incorporate uncertainty in robot pose and features
Light- and occlusion-invariant features
Predictive display: control the robot's motion by pointing or dragging in image space

Graphics Model (dynamic texture)
Automatic feature detection for tracking
Light-invariant model
Compose multiple models into a scene based on intuitive geometric constraints
Detailed geometry (range information from images)

