This document discusses visual simultaneous localization and mapping (VSLAM). It provides an overview of VSLAM, including its applications in robotics and augmented/virtual reality. It also summarizes different VSLAM techniques like sparse and dense approaches. Examples of VSLAM systems for small robots and self-driving cars are described. Finally, it touches on future areas like multi-robot cooperation and semantic VSLAM.
2. Outline
• What is SLAM?
• Application
• Visual SLAM
• Literature survey
• Sparse-Visual-SLAM
• Dense-Visual-SLAM
• System requirements of VSLAM for a small system (cleaner robot, Zenbo)
• System requirements of VSLAM for a large system (driverless car)
• Multi-Robot cooperation
• Demo Time
Image source: https://www.slideshare.net/Pmansournia/chadormalu-urban-robot
3. What is SLAM?
• Simultaneous Localization and Mapping
• Sensing, Localization and Mapping
• Generating a map of an unknown environment while localizing the mapping system within that map
• Localization: Where am I?
• Mapping: What does the world look like?
• Both must run simultaneously, within the timing budget the system requires (e.g., a driverless car may need an update in under 5 ms)
• Also needed: pose and map optimization, and recovery when tracking is lost
A toy numerical sketch of this joint estimation follows below.
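To make "simultaneous" concrete, here is a minimal, self-contained sketch (not from the original slides) of joint localization and mapping in one dimension: a robot moves with noisy odometry and measures the range to a single landmark, and a Kalman filter estimates the robot pose and the landmark position together. All names and noise values are illustrative assumptions.

```python
import numpy as np

# Minimal 1-D "SLAM" sketch: jointly estimate robot position and one
# landmark position from noisy odometry and noisy range measurements.
# All values below are illustrative assumptions, not from the slides.

rng = np.random.default_rng(0)

true_robot, true_landmark = 0.0, 5.0
x = np.array([0.0, 3.0])            # state estimate: [robot, landmark] (landmark guess is poor)
P = np.diag([0.01, 4.0])            # state covariance
Q = np.diag([0.05, 0.0])            # motion noise (only the robot moves)
R = 0.1                             # measurement noise (range to landmark)

F = np.eye(2)                       # motion model: robot += u, landmark static
H = np.array([[-1.0, 1.0]])         # measurement model: z = landmark - robot

for step in range(20):
    u = 0.5                                      # commanded forward motion
    true_robot += u + rng.normal(0, np.sqrt(Q[0, 0]))
    z = true_landmark - true_robot + rng.normal(0, np.sqrt(R))

    # Predict: apply odometry to the robot part of the state.
    x = F @ x + np.array([u, 0.0])
    P = F @ P @ F.T + Q

    # Update: fuse the range measurement, which corrects robot AND landmark.
    y = z - (H @ x)[0]                           # innovation
    S = (H @ P @ H.T)[0, 0] + R
    K = (P @ H.T / S).ravel()                    # Kalman gain
    x = x + K * y
    P = (np.eye(2) - np.outer(K, H)) @ P

print("estimated robot/landmark:", x, " true:", true_robot, true_landmark)
```

The point of the sketch is the coupling: a single range measurement updates both the robot estimate and the landmark estimate, which is exactly why localization and mapping have to be solved together.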
4. SLAM is a hard problem
• These issues must be solved simultaneously
(Figure: landmarks observed from the moving sensor)
5. SLAM type
• Probabilistic approaches
• e.g., EKF-SLAM (IMU fused with visual)
• Graph optimization approaches
• General graph optimization (visual)
• Graph optimization with probability
• iSAM (visual, dynamic Bayesian network)
• GTSAM
A minimal pose-graph example follows below.
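As a rough illustration of the graph-optimization idea (not taken from any of the libraries above), the sketch below builds a tiny 1-D pose graph: nodes are robot positions, edges are relative measurements from odometry plus one loop closure, and the poses are found by weighted least squares. The poses, measurements, and weights are made-up values for illustration.

```python
import numpy as np

# Tiny 1-D pose-graph sketch: nodes are robot positions, edges are relative
# measurements (odometry plus one loop closure). Solving the weighted
# least-squares system is the essence of graph-based SLAM.

n = 4                                   # poses x0..x3, with x0 anchored at 0
edges = [                               # (i, j, measured x_j - x_i, information weight)
    (0, 1, 1.1, 1.0),                   # odometry
    (1, 2, 1.0, 1.0),
    (2, 3, 0.9, 1.0),
    (0, 3, 2.7, 10.0),                  # loop closure: strong, slightly disagrees with odometry
]

A_rows, b, w = [], [], []
for i, j, meas, info in edges:
    row = np.zeros(n)
    row[j], row[i] = 1.0, -1.0          # residual = (x_j - x_i) - meas
    A_rows.append(row); b.append(meas); w.append(np.sqrt(info))

# Anchor the first pose to remove the gauge freedom.
row0 = np.zeros(n); row0[0] = 1.0
A_rows.append(row0); b.append(0.0); w.append(100.0)

A = np.array(A_rows) * np.array(w)[:, None]
rhs = np.array(b) * np.array(w)
x, *_ = np.linalg.lstsq(A, rhs, rcond=None)
print("optimized poses:", np.round(x, 3))
```

Real systems such as g2o, iSAM, and GTSAM solve the same kind of problem, but with nonlinear residuals over 3-D poses (SE(3)), so they iterate Gauss-Newton or Levenberg-Marquardt instead of solving a single linear system.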
6. Sebastian Thrun
He is CEO of the Kitty Hawk Corporation and chairman and co-founder of Udacity. Before that, he was a Google VP and Fellow, a Professor of Computer Science at Stanford University, and before that at Carnegie Mellon University.
https://en.wikipedia.org/wiki/Sebastian_Thrun
7. Visual SLAM Literature (2007–2017)
All works have more than 100 citations, except the 2017 papers.
• MonoSLAM: Andrew J. Davison, Imperial College London
• PTAM: David Murray, University of Oxford
• DTAM: Andrew J. Davison
• KinectFusion: Andrew J. Davison
• MonoFusion: Steven Bathiche, Microsoft
• SLAM++: Andrew J. Davison
• Kintinuous: John Leonard, MIT
• LSD-SLAM: Daniel Cremers, TUM
• RGBDSLAM-v2: Wolfram Burgard, University of Freiburg
• RTAB-Map: François Michaud, Université de Sherbrooke
• SVO: Davide Scaramuzza, University of Zurich
• InfiniTAM: David Murray, University of Oxford
• ORB-SLAM: Juan D. Tardós, Universidad de Zaragoza
• ElasticFusion: Andrew J. Davison
• BundleFusion: Matthias Nießner, Stanford University
• RGBDTAM: Javier Civera, Universidad de Zaragoza
12. Small SLAM Systems for Robots (Help each other?)
Roomba 980 (iRobot)
• First product with vision-based mapping
• LPC3250 processor from NXP (ARM9 SoC)
• 2 MB flash, 16 MB SDRAM
• WiFi connected via a separate module
Zenbo (ASUS)
• OS: Android
• 4 GB memory, 128 GB storage
• CPU: Intel Atom
• 3D camera (Intel RealSense), 13 MP color camera
• WiFi, Bluetooth
13. Large SLAM System for Self-driving Shuttle
7starlake
Safety is the first priority.
http://7starlake.com/
14. Multi-Robot Cooperation?
Scenario
• 1. Alexa (Amazon Echo) acts as the home's central manager
• 2. One Zenbo builds the environment map (SLAM)
• 3. One Zenbo detects people's behavior
15. SLAM: Future Issues
• Dynamic scenes or objects
• Multi-robot cooperation
• Semantic SLAM
• Lightweight
• Mobile
Next Step
• Apply a feasible SLAM approach (sparse SLAM, e.g., ORB-SLAM) to Zenbo; a front-end sketch follows below
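ORB-SLAM is a sparse method: it tracks a limited set of ORB features from frame to frame instead of using every pixel, which is what makes it plausible on hardware like Zenbo's Atom CPU. The snippet below is a stand-alone illustration of that front-end step using OpenCV; it is not the ORB-SLAM code itself, and the image file names are placeholders.

```python
import cv2

# Stand-alone illustration of the sparse front-end used by ORB-SLAM-style
# systems: detect ORB keypoints in two frames and match their descriptors.
# 'frame0.png' and 'frame1.png' are placeholder file names.

img0 = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)            # ORB: fast binary features suited to low-power hardware
kp0, des0 = orb.detectAndCompute(img0, None)
kp1, des1 = orb.detectAndCompute(img1, None)

# Hamming distance for binary descriptors; cross-check rejects weak matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des0, des1), key=lambda m: m.distance)

print(f"{len(matches)} matches; best distance {matches[0].distance:.0f}")
# A full SLAM system would now estimate relative camera motion from these
# matches (e.g., essential-matrix RANSAC) and triangulate sparse map points.
```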