Visual Odometry

by Inkyu Sa
Motivation

This laser scanner is good enough to obtain the position (x, y, θ, z) of the quadrotor at 10 Hz. The data comes from the ROS canonical scan matcher package.
[Plot: measured quadrotor position, x position (m) vs. y position (m), over a ±0.5 m range]
Pros:
- Relatively high accuracy.
- ROS device driver support.
Cons:
- Expensive: USD 2,375.
- Low frequency: 10 Hz.
- Only for 2D.
Motivation

The Kinect 3D depth camera can provide not only 2D RGB images but also 3D depth images at 30 Hz.

http://www.ifixit.com
Pros:
- Reasonable price: AUD 180.
- 3-dimensional information.
- OpenNI Kinect ROS device driver and Point Cloud Library support.
- Usable for visual odometry, object recognition, 3D SLAM, and so on.

Cons:
- Relatively low accuracy and considerable noise.
- Heavy: the original Kinect weighs over 500 g.
- Requires high computational power.
- Narrow field of view: H = 57°, V = 43°.
Contents
                    
$$\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} a \\ b \\ 1 \end{bmatrix}, \qquad a = \tan\{\alpha \tan^{-1}(u/f)\}\cos\beta, \qquad b = \tan\{\alpha \tan^{-1}(v/f)\}\sin\beta$$

where u and v are the x and y coordinates of the point on the image plane.
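A minimal sketch of this mapping, assuming the intrinsics u0, v0, f and the elevation gain α mentioned in the speaker notes, and taking β as the pixel's azimuth angle about the principal point (an assumption; the slide image defining β is not recoverable):

```python
import numpy as np

def image_to_ground(u, v, u0, v0, f, alpha):
    """Map an image-plane point (u, v) to the ground-plane ray [a, b, 1].

    u0, v0 = principal point, f = focal length, alpha = elevation gain
    (names from the speaker notes). beta is assumed to be the azimuth
    of the pixel about the principal point.
    """
    du, dv = u - u0, v - v0          # center the pixel (assumed)
    beta = np.arctan2(dv, du)        # assumed definition of beta
    a = np.tan(alpha * np.arctan(du / f)) * np.cos(beta)
    b = np.tan(alpha * np.arctan(dv / f)) * np.sin(beta)
    return np.array([a, b, 1.0])
```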
A robot motion (∆x, ∆y, ∆θ) takes the pose (x, y) at time t to (x′, y′) at time t+1 and moves a feature from (u, v) to (u′, v′) on the image plane. The predicted flow of a feature is

$$(\hat{du}, \hat{dv}) = P(u, v, \{u_0, v_0, f, \alpha\}, \{\Delta x, \Delta y, \Delta \theta\})$$

where P is the optical-flow function of the feature coordinate. The robust error between the measured and predicted flow is

$$e_1 = \operatorname{med}_i \left[ (du_i - \hat{du}_i)^2 + (dv_i - \hat{dv}_i)^2 \right]$$
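A small sketch of this robust error, assuming the measured and predicted flows are stacked as (N, 2) arrays:

```python
import numpy as np

def median_flow_error(flow_meas, flow_pred):
    """e1: median squared distance between measured and predicted flow.

    flow_meas, flow_pred: (N, 2) arrays of per-feature (du, dv).
    The median makes the error insensitive to outlier feature tracks.
    """
    diff = flow_meas - flow_pred
    return np.median(np.sum(diff**2, axis=1))
```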
Solar-powered robot, Hyperion, developed by CMU.

The parameter estimates are somewhat noisy but agree closely with those determined using a CMU calibration method.

[Plot legend: estimates = (Value), calibration method = (True)]
Using the following equation, the observed robot-frame velocity can be calculated:

$$\begin{bmatrix} {}^{R}\dot{x} \\ {}^{R}\dot{y} \end{bmatrix} = R_Z(\theta) \begin{bmatrix} {}^{W}\dot{x} \\ {}^{W}\dot{y} \end{bmatrix}$$

Then integrating the robot velocity over the sample time produces the position of the robot, as shown in the left image:

$$\begin{bmatrix} {}^{R}x \\ {}^{R}y \end{bmatrix} = \sum \begin{bmatrix} {}^{R}\dot{x} \\ {}^{R}\dot{y} \end{bmatrix} \Delta t$$
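A minimal sketch of these two equations, assuming R_Z(θ) is the standard planar rotation and illustrative array inputs:

```python
import numpy as np

def integrate_velocity(world_vels, thetas, dt):
    """Dead-reckon the robot position from world-frame velocities.

    Rotate each world-frame velocity into the robot frame with
    R_Z(theta), then accumulate over the sample time dt.
    world_vels: (N, 2) array of (x_dot, y_dot); thetas: (N,) headings.
    The rotation sign convention is assumed.
    """
    pos = np.zeros(2)
    path = []
    for (vx, vy), th in zip(world_vels, thetas):
        c, s = np.cos(th), np.sin(th)
        Rz = np.array([[c, s], [-s, c]])           # R_Z(theta), assumed
        pos = pos + Rz @ np.array([vx, vy]) * dt   # integrate velocity
        path.append(pos.copy())
    return np.array(path)
```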
State vector: 6 DOF of camera pose + 3 DOF per feature position.
Observation vector: the projection data for the current image.
Process noise covariance: must be known.
Measurement noise covariance: must be known; assumed isotropic with a variance of 4.0 pixels.
Error covariance.
Kalman gain.
Observation matrix.
$$\hat{x}_k = \hat{x}_k^- + K_k (z_k - H \hat{x}_k^-)$$

The measurement is the re-projection of a point:

$$z_j = R(\rho)^{T} Z_j + t$$

ρ and t are the camera-to-world rotation (Euler angles) and the translation of the camera.
Z_j is the position of point j in the 3D world coordinate system.
This measurement is nonlinear in the estimated parameters, which motivates the use of the iterated extended Kalman filter.
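A generic sketch of one iterated-EKF measurement update, not the paper's implementation; here h and H_jac stand for the nonlinear re-projection function and its Jacobian:

```python
import numpy as np

def iekf_update(x_prior, P_prior, z, h, H_jac, R, iters=3):
    """One iterated-EKF measurement update (illustrative sketch).

    x_prior, P_prior: prior state and covariance; z: measurement;
    h(x): nonlinear measurement (re-projection) function;
    H_jac(x): its Jacobian; R: measurement noise covariance.
    Re-linearizing around the updated state is what distinguishes
    the iterated EKF from a single EKF update.
    """
    x = x_prior.copy()
    for _ in range(iters):
        H = H_jac(x)
        S = H @ P_prior @ H.T + R
        K = P_prior @ H.T @ np.linalg.inv(S)    # Kalman gain
        # IEKF residual includes the re-linearization term H(x_prior - x)
        x = x_prior + K @ (z - h(x) - H @ (x_prior - x))
    P = (np.eye(len(x)) - K @ H) @ P_prior
    return x, P
```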
The initial state estimate distribution is obtained using a batch algorithm [1] to get the mean and covariance.

This estimates the initial 6D camera positions corresponding to several images in the sequence.

Result: 29.2 m traveled, average error = 22.9 cm, maximum error = 72.7 cm.
[Figures: gradient-distribution plots in the (x, y) image axes; credit Robert Collins, CSE486, Penn State]
- λ1 = large, λ2 = small: an edge.
- λ1 = small, λ2 = small: a flat region.
- λ1 = large, λ2 = large: a corner.
$$E(u, v) = \sum_{x,y} w(x, y)\,[I(x + u, y + v) - I(x, y)]^2$$
$$\approx \sum_{x,y} [I(x, y) + u I_x + v I_y - I(x, y)]^2$$
$$= \sum_{x,y} \left( u^2 I_x^2 + 2uv\, I_x I_y + v^2 I_y^2 \right)$$
$$= \sum_{x,y} \begin{bmatrix} u & v \end{bmatrix} \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix} \begin{bmatrix} u \\ v \end{bmatrix}$$
$$E(u, v) \cong \begin{bmatrix} u & v \end{bmatrix} M \begin{bmatrix} u \\ v \end{bmatrix}, \qquad M = \sum_{x,y} w(x, y) \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix}$$
$$R = \det M - k\,(\operatorname{trace} M)^2 = I_x^2 I_y^2 - k\,(I_x^2 + I_y^2)^2$$

$$\det M = \lambda_1 \lambda_2, \qquad \operatorname{trace} M = \lambda_1 + \lambda_2, \qquad \alpha = I_x^2, \qquad \beta = I_y^2$$

$$I_x = G_x^{\sigma} * I, \qquad I_y = G_y^{\sigma} * I$$

k is an empirically determined constant, ranging from 0.04 to 0.06.

$$M = \sum_{x,y} w(x, y) \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix}$$

Source: [3]
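A compact sketch of the response computation, using Sobel gradients in place of the Gaussian derivative filters G_x^σ, G_y^σ and a Gaussian window for w(x, y):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def harris_response(img, sigma=1.0, k=0.05):
    """Harris corner response R = det(M) - k * trace(M)^2 (a sketch).

    img: 2D grayscale float array. k = 0.05 lies within the
    empirical 0.04-0.06 range above.
    """
    Ix = sobel(img, axis=1)   # horizontal gradient
    Iy = sobel(img, axis=0)   # vertical gradient
    # Elements of M, summed over the window via Gaussian weighting
    Sxx = gaussian_filter(Ix * Ix, sigma)
    Syy = gaussian_filter(Iy * Iy, sigma)
    Sxy = gaussian_filter(Ix * Iy, sigma)
    det_M = Sxx * Syy - Sxy**2
    trace_M = Sxx + Syy
    return det_M - k * trace_M**2   # large R indicates a corner
```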
For each detected feature, search all features within a certain disparity limit in the next image (10% of the image size).

[Figure: feature neighborhoods at times (t) and (t-1)]
For each detected feature, calculate the normalized correlation over an 11x11 window:

$$A_i = \sum_{x,y} I_i, \qquad B_i = \sum_{x,y} I_i^2, \qquad C_i = \frac{1}{\sqrt{n B_i - A_i^2}}, \qquad D = \sum_{x,y} I_1 I_2$$

with n = 121 (11 × 11). The normalized correlation between two patches is

$$NC_{1,2} = (nD - A_1 A_2)\, C_1 C_2$$

Find the highest value of NC (mutual consistency check).
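A direct transcription of these formulas for two patches (a sketch; the mutual consistency check over all candidate pairs is omitted):

```python
import numpy as np

def normalized_correlation(p1, p2):
    """NC between two equally sized patches, per the formulas above.

    p1, p2: 2D float arrays (e.g. 11x11, so n = 121 pixels).
    """
    n = p1.size
    A1, A2 = p1.sum(), p2.sum()
    B1, B2 = (p1**2).sum(), (p2**2).sum()
    C1 = 1.0 / np.sqrt(n * B1 - A1**2)
    C2 = 1.0 / np.sqrt(n * B2 - A2**2)
    D = (p1 * p2).sum()
    return (n * D - A1 * A2) * C1 * C2
```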
Circles show the current feature locations, and lines show the feature tracks over the images.
Track matched features and estimate the relative position using the 5-point algorithm; RANSAC refines the position.

Construct 3D points from the first and last observations and estimate the scale factor.

Track an additional number of frames and compute the camera position from the known 3D points using the 3-point algorithm; RANSAC refines the positions.
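A rough sketch of the two pose-estimation steps using OpenCV's built-in solvers; the paper's own 5-point/3-point implementations are not shown, and the function names here are OpenCV's:

```python
import cv2
import numpy as np

def relative_pose(pts1, pts2, K):
    """Two-view relative pose via the 5-point algorithm + RANSAC (sketch).

    pts1, pts2: (N, 2) float arrays of matched pixel coordinates;
    K: 3x3 camera intrinsic matrix.
    """
    E, inliers = cv2.findEssentialMat(pts1, pts2, K,
                                      method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t   # translation is only up to scale in monocular VO

def pose_from_3d(points3d, pts2d, K):
    """Camera pose from known 3D points via P3P + RANSAC (sketch)."""
    ok, rvec, tvec, _ = cv2.solvePnPRansac(points3d, pts2d, K, None,
                                           flags=cv2.SOLVEPNP_P3P)
    return cv2.Rodrigues(rvec)[0], tvec
```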
Triangulate the observed matches into 3D points.

[Figure: triangulation geometry, http://en.wikipedia.org/wiki/File:TriangulationReal.svg, with the error measured as abs(y1 − y1′)]
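A minimal triangulation sketch using OpenCV's linear solver, assuming known 3x4 projection matrices for the two views:

```python
import cv2
import numpy as np

def triangulate(P1, P2, pts1, pts2):
    """Triangulate matched image points into 3D (illustrative sketch).

    P1, P2: 3x4 projection matrices of the two views;
    pts1, pts2: (N, 2) float arrays of matched pixel coordinates.
    """
    X_h = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)  # 4xN homogeneous
    return (X_h[:3] / X_h[3]).T                          # Nx3 Euclidean
```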
Track features for a certain number of frames, calculate the position of the stereo rig, and refine with RANSAC and the 3-point algorithm.

$$E\{(p_1, p_1'), (p_2, p_2'), (p_3, p_3')\}$$

From this equation, we can obtain the R and T matrices.

[Figure: three point pairs p1, p2, p3 observed at times t and t-1]
Triangulate all new feature matches and repeat the previous steps a certain number of times.
Note: in this paper, a "firewall" is a mechanism for avoiding error propagation. The idea is to never triangulate 3D points using observations beyond the most recent firewall.

[Figure: projection error over time; the firewall is set at a chosen frame, and only observations from that frame onward are used to triangulate 3D points]
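A toy sketch of the firewall bookkeeping; the spike-detection rule and the threshold are hypothetical, not the paper's criterion:

```python
def update_firewall(proj_errors, firewall_idx, threshold=2.0):
    """Move the firewall forward when projection error spikes (sketch).

    proj_errors: list of per-frame projection errors so far;
    firewall_idx: index of the most recent firewall frame. Frames
    before the firewall are never used to triangulate new 3D points,
    which contains error propagation.
    """
    if proj_errors[-1] > threshold:           # hypothetical trigger rule
        firewall_idx = len(proj_errors) - 1   # firewall at current frame
    return firewall_idx
```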
Image size: 720x240
Baseline: 28 cm
HFOV: 50°
Visual odometry's frame processing rate is around 13 Hz.
No a priori knowledge of the motion.
The 3D trajectory is estimated.
DGPS accuracy in RG-2 mode is 2 cm.
Red = VO, blue = DGPS. Distance traveled = 184 m; error at the endpoint is 4.1 m.

Frame-to-frame error analysis of the vehicle heading estimates: the approximately zero-mean distribution suggests that the estimates are not biased.
Official runs reporting visual odometry results to DARPA. "Remote" means manual control by a person who is not a member of the VO team. Distance from the true DGPS position at the end of each run, in metres:

Autonomous run:
- GPS − (Gyro+Wheel) = 0.29 m
- GPS − (Gyro+Vis) = 0.77 m

Remote control:
- GPS − (Gyro+Wheel) = −6.78 m
- GPS − (Gyro+Vis) = 3.5 m
[Plot legend: blue = DGPS, green = Gyro+VO, red = Gyro+Wheel]

[Plot legend: red = VO, green = Wheel]
[Plot legend: dark plus (blue) = DGPS, thick line (green) = VO, thin line (red) = Wheel+IMU]

The deviation is because of slippage on a muddy trail.

[Plot legends, left and right: dark plus (blue) = DGPS, thick line (green) = VO; thin line (red) = Wheel+IMU on the left and Wheel+VO on the right]
Thank you

Editor's Notes

  1. \n
  2. This is our quadrotor. Currently we use the laser scanner to get the position.\n\nStdev for x=0.13m and y=0.09m, The graph is 1m x 1m for 2D. \n\nExplain advantages and disadvantage.\n\nLet’s look at vision sensor for visual odometry.\n
  3. This is our quadrotor. Currently we use the laser scanner to get the position.\n\nStdev for x=0.13m and y=0.09m, The graph is 1m x 1m for 2D. \n\nExplain advantages and disadvantage.\n\nLet’s look at vision sensor for visual odometry.\n
  4. This is our quadrotor. Currently we use the laser scanner to get the position.\n\nStdev for x=0.13m and y=0.09m, The graph is 1m x 1m for 2D. \n\nExplain advantages and disadvantage.\n\nLet’s look at vision sensor for visual odometry.\n
  5. This is our quadrotor. Currently we use the laser scanner to get the position.\n\nStdev for x=0.13m and y=0.09m, The graph is 1m x 1m for 2D. \n\nExplain advantages and disadvantage.\n\nLet’s look at vision sensor for visual odometry.\n
  6. This is our quadrotor. Currently we use the laser scanner to get the position.\n\nStdev for x=0.13m and y=0.09m, The graph is 1m x 1m for 2D. \n\nExplain advantages and disadvantage.\n\nLet’s look at vision sensor for visual odometry.\n
  7. This is our quadrotor. Currently we use the laser scanner to get the position.\n\nStdev for x=0.13m and y=0.09m, The graph is 1m x 1m for 2D. \n\nExplain advantages and disadvantage.\n\nLet’s look at vision sensor for visual odometry.\n
  8. This is our quadrotor. Currently we use the laser scanner to get the position.\n\nStdev for x=0.13m and y=0.09m, The graph is 1m x 1m for 2D. \n
  9. This is our quadrotor. Currently we use the laser scanner to get the position.\n\nStdev for x=0.13m and y=0.09m, The graph is 1m x 1m for 2D. \n
  10. This is our quadrotor. Currently we use the laser scanner to get the position.\n\nStdev for x=0.13m and y=0.09m, The graph is 1m x 1m for 2D. \n
  11. This is our quadrotor. Currently we use the laser scanner to get the position.\n\nStdev for x=0.13m and y=0.09m, The graph is 1m x 1m for 2D. \n
  12. This is our quadrotor. Currently we use the laser scanner to get the position.\n\nStdev for x=0.13m and y=0.09m, The graph is 1m x 1m for 2D. \n
  13. This is our quadrotor. Currently we use the laser scanner to get the position.\n\nStdev for x=0.13m and y=0.09m, The graph is 1m x 1m for 2D. \n
  14. This is our quadrotor. Currently we use the laser scanner to get the position.\n\nStdev for x=0.13m and y=0.09m, The graph is 1m x 1m for 2D. \n
  15. This is our quadrotor. Currently we use the laser scanner to get the position.\n\nStdev for x=0.13m and y=0.09m, The graph is 1m x 1m for 2D. \n
  16. This is our quadrotor. Currently we use the laser scanner to get the position.\n\nStdev for x=0.13m and y=0.09m, The graph is 1m x 1m for 2D. \n
  17. \n
  18. \n
  19. \n
  20. \n
  21. \n
  22. \n
  23. \n
  24. \n
  25. \n
  26. \n
  27. \n
  28. \n
  29. principle point u0,v0, focal length f, elevation gain alpha\nP = a \n
  30. principle point u0,v0, focal length f, elevation gain alpha\nP = a \n
  31. principle point u0,v0, focal length f, elevation gain alpha\nP = a \n
  32. principle point u0,v0, focal length f, elevation gain alpha\nP = a \n
  33. principle point u0,v0, focal length f, elevation gain alpha\nP = a \n
  34. principle point u0,v0, focal length f, elevation gain alpha\nP = a \n
  35. principle point u0,v0, focal length f, elevation gain alpha\nP = a \n
  36. principle point u0,v0, focal length f, elevation gain alpha\nP = a \n
  37. principle point u0,v0, focal length f, elevation gain alpha\nP = a \n
  38. principle point u0,v0, focal length f, elevation gain alpha\nP = a \n
  39. The different approach which is proposed in this paper is structure from motion.\nx_hat=posteriori state estimate\nx_hat_minus=priori state estimate\n
  40. The different approach which is proposed in this paper is structure from motion.\nx_hat=posteriori state estimate\nx_hat_minus=priori state estimate\n
  41. The different approach which is proposed in this paper is structure from motion.\nx_hat=posteriori state estimate\nx_hat_minus=priori state estimate\n
  42. The different approach which is proposed in this paper is structure from motion.\nx_hat=posteriori state estimate\nx_hat_minus=priori state estimate\n
  43. The different approach which is proposed in this paper is structure from motion.\nx_hat=posteriori state estimate\nx_hat_minus=priori state estimate\n
  44. The different approach which is proposed in this paper is structure from motion.\nx_hat=posteriori state estimate\nx_hat_minus=priori state estimate\n
  45. The different approach which is proposed in this paper is structure from motion.\nx_hat=posteriori state estimate\nx_hat_minus=priori state estimate\n
  46. The different approach which is proposed in this paper is structure from motion.\nx_hat=posteriori state estimate\nx_hat_minus=priori state estimate\n
  47. The different approach which is proposed in this paper is structure from motion.\nx_hat=posteriori state estimate\nx_hat_minus=priori state estimate\n
  48. The different approach which is proposed in this paper is structure from motion.\nx_hat=posteriori state estimate\nx_hat_minus=priori state estimate\n
  49. The different approach which is proposed in this paper is structure from motion.\nx_hat=posteriori state estimate\nx_hat_minus=priori state estimate\n
  50. \n
  51. \n
52-131. Basic idea: we can detect a corner point by looking at the intensity values within a small window. Moving the window in any direction should yield a large change in appearance at a corner; a worked sketch follows below.
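Formally, the change in appearance for a window shift (u, v) is the weighted sum of squared differences

E(u, v) = \sum_{x,y} w(x, y)\,[\,I(x+u, y+v) - I(x, y)\,]^2,

and the Harris detector scores each pixel with R = det(M) - k\,(\mathrm{trace}\,M)^2, where M is the second-moment (gradient autocorrelation) matrix of the window: large R in all shift directions indicates a corner. Below is a minimal Python sketch using OpenCV's cv2.cornerHarris; the input file name, block size, Sobel aperture, Harris constant k, and threshold are illustrative assumptions, not values from the slides.

import cv2
import numpy as np

img = cv2.imread("frame.png")          # hypothetical input image
if img is None:
    raise FileNotFoundError("frame.png")

# cornerHarris expects a single-channel float32 image
gray = np.float32(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY))

# Arguments: block (window) size 2, Sobel aperture 3, Harris constant k = 0.04
response = cv2.cornerHarris(gray, 2, 3, 0.04)

# Mark pixels whose corner response exceeds 1% of the maximum response
img[response > 0.01 * response.max()] = [0, 0, 255]
cv2.imwrite("corners.png", img)

Thresholding against a fraction of the peak response is a common quick filter; a visual-odometry front end would typically follow this with non-maximum suppression and then track the surviving corners between frames.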
132-145. Autonomous run: 96.09 - 95.80 = 0.29 for GPS vs. (Gyro + Wheel odometry); 96.09 - 95.32 = 0.77 for GPS vs. (Gyro + Visual odometry).
147. Principal point u0, v0; focal length f; elevation gain alpha.
P = a