Research of a Position
Estimator for a Wheel-based
Mobile Platform
AUTOMATIC CONTROL LABORATORY
École Polytechnique Fédérale de Lausanne
Dr. Denis Gillet, MER
INSTITUTE OF AUTOMATIC CONTROL ENGINEERING
Technische Universität München
Univ.-Professor Dr.-Ing./Univ. Tokio Martin Buss
Master’s Thesis
cand.ing. Alexandre Jung
born March 25, 1981
Tutors: Dr. Denis Gillet (MER), Dipl.-Ing. Ulrich Unterhinninghofen,
Dipl.-Ing. Kolja Kühnlenz
Begin of work: 21. 03. 2005
Intermediate report: 06. 06. 2005
Final report: 22. 07. 2005
Abstract
In this thesis, a self-localization algorithm for an omnidirectional wheeled mobile platform is described. An Extended Kalman Filter is used to fuse the measurements of multiple sensors: odometry, a gyroscope, a magnetic tracker and a stereo camera. The development of sensor error models is followed by a description of the sensor fusion algorithm. This algorithm comprises two separate steps in which the sensors are grouped according to the incremental or absolute nature of their measurements. The second part of this work is dedicated to the integration of stereo image processing in the localization procedure. A simple vision based landmark extraction algorithm combined with stereo triangulation is integrated in the Extended Kalman Filter to improve pose estimates. Finally, experimental results demonstrate the effectiveness of the presented algorithms.
Acknowledgment
I would like to thank Professor Martin Buss for giving me the opportunity to write my Master's Thesis at the Institute of Automatic Control Engineering (LSR), Technische Universität München. I am also deeply grateful to Professor Michel Deville and Dr. Denis Gillet for enabling the cooperation between the Automatic Control Laboratory (LA), École Polytechnique Fédérale de Lausanne, and the LSR in the framework of this thesis. I would also like to express my deepest gratitude to my supervisors at LSR, Ulrich Unterhinninghofen and Kolja Kühnlenz, for their valuable help and assistance in the course of this thesis.
I also wish to thank my family and especially my parents for their encouragement
during my studies.
Alexandre Jung
alexandre.jung@epfl.ch
Contents

1 Motivation and Problem Definition
2 Kinematical Model
  2.1 Computing Incremental Position Updates
    2.1.1 Least Squares Solution
    2.1.2 Differential Drive Solution
    2.1.3 Experimental Comparison of Both Solutions
  2.2 Absolute Position Updating
3 Sensor Error Models
  3.1 Odometry Error Model
    3.1.1 Pose Uncertainty Model
    3.1.2 Analytical Expression for $\bar{V}^o$
    3.1.3 Experimental Results and Analysis
  3.2 Gyroscope Error Model
    3.2.1 Systematic and Non Systematic Errors
    3.2.2 Experimental Determination of $\dot{\psi}^g_{drift}$ and $\dot{V}^g$
    3.2.3 Experimental Determination of $K^g$ and $\bar{V}^g$
  3.3 Magnetic Tracker Error Model
    3.3.1 Systematic and Non Systematic Errors
    3.3.2 Calibration
4 Sensor Fusion
  4.1 The Extended Kalman Filter
  4.2 Delayed-State Measurement Problem
  4.3 Merging of Incremental Pose Estimates
  4.4 Including Absolute Position Updates
5 Improving Localization with a Stereo Camera
  5.1 Approach
  5.2 Hardware Description
  5.3 Depth Map Generation
  5.4 Coordinate Systems
  5.5 Object Localization
    5.5.1 Stereo Camera Model
    5.5.2 Triangulation Equations of the Reconstruction Process
  5.6 Systematic and Non Systematic Errors
    5.6.1 Camera Calibration
    5.6.2 Non Systematic Errors
  5.7 Motion Solving Algorithm
  5.8 Including Landmark Detection in the EKF
    5.8.1 Modification of the Time Update Equations
    5.8.2 Modification of the Measurement Update Equations
    5.8.3 Filter Initialization
6 Experimental Results
  6.1 Pose Error
    6.1.1 Straight Track
    6.1.2 Circular Track
  6.2 Absolute Position Update Frequency
  6.3 Filter Initialization
  6.4 Stereo Image Processing
7 Conclusions and Future Work
A Fundamentals on Stochastic Theory
List of Figures

1.1 Mobile platform
2.1 Schematic top view of the mobile platform
2.2 Inadmissible wheel configuration
2.3a Estimated trajectory using dd solution
2.3b Estimated trajectory using lsq solution
2.4a Estimated orientation using dd solution
2.4b Estimated orientation using lsq solution
3.1 Distribution of drift compensated gyro measurements ($\dot{\psi} = 0$)
3.2 Isolines of the magnetic tracker measurement coordinates
3.3 Comparison of tracker measurements before and after calibration
4.1 Diagram showing the implemented Kalman Filter structure
5.1 Stereo camera
5.2 Disparity in stereo images
5.3 Coordinate systems in lab environment
5.4 Stereo camera coordinate systems
5.5 Stereo geometry
5.6 Camera calibration setup
5.7 Comparison of ideal and extracted pixel coordinates in calibration pattern
5.8 Reconstructed marker locations
5.9 Uncertainty in stereo image processing
5.10 Uncertainty in object locations through stereo image processing
5.11 Lab environment equipped with landmarks
5.12 Block diagram of localization algorithm using stereo image processing
6.1 Influence of absolute position update frequency on pose error
6.2 Effect of incorrect filter initialization on pose estimation
6.3 Commanded platform motion for image processing experiment
6.4 Estimated trajectory without sensor fusion (odometry alone)
6.5 Estimated trajectory using stereo image processing
List of Tables

2.1a Pose error using odometry (circle, R = 1.75 m), IWC
2.1b Pose error using odometry (circle, R = 1 m), AWC
3.1 Experimental results for the determination of $\bar{V}^o$ (straight line motion)
3.2 Experimental results for the determination of $\bar{V}^o$ (circular motion)
3.3 Average $K^g$ for different yaw rates $\dot{\psi}$
3.4 Value of $\bar{V}^g$ for different yaw rates $\dot{\psi}$
4.1a EKF Time Update Equations
4.1b EKF Measurement Update Equations
6.1 Pose error for a 4000 mm straight track, velocity 1 m/s
6.2a Pose error for a circular track (R = 1.75 m, 0.3 m/s, IWC)
6.2b Pose error for a circular track (R = 1 m, 0.3 m/s, AWC)
6.3 CPU time required for localization algorithm (experiment: 60 s)
List of Symbols

$(O, x, y)$  absolute coordinate frame
$(P, {}^P x, {}^P y)$  mobile platform coordinate frame
$(C, {}^C x, {}^C y)$  stereo camera coordinate frame
$A_k$  Jacobian matrix of Kalman Filter at time step k
$b$  half the stereo camera baseline
${}^P \underline{b}_i$  position of feature i in the platform coordinate frame
$C_x$  horizontal pixel coordinate of image center
$C^o_{\Delta s_i}$  odometry error covariance matrix
$C^o_{\Delta^P x}$  odometry error covariance matrix
$d_x$  distance separating two CCD sensor elements
$e_{lat}$  lateral position estimate error w.r.t. the platform coordinate frame
$e_{long}$  longitudinal position estimate error w.r.t. the platform coordinate frame
$\underline{e}_x$  pose estimate error in absolute coordinate frame
$f$  focal length of stereo camera
$f(\cdot)$  process model of Kalman Filter
$i_{drive}$  transmission ratio from driving motor to wheel revolution angle
$i_{steer}$  transmission ratio from steering motor to wheel steering angle
$J_c$  Jacobian matrix of stereo reconstruction process
$K^g$  gyroscope scaling factor
$K_k$  gain factor of Kalman Filter measurement update equations at time step k
$l$  longitudinal wheel base of mobile platform
${}^P \underline{l}_i$  position of landmark i in the platform coordinate frame
$P_k$  state covariance matrix of Kalman Filter at time step k
$P_{dr,k}$  state covariance matrix of dead reckoning process at time step k
$Q_k$  process noise covariance matrix of Kalman Filter at time step k
$R_k$  measurement noise covariance matrix of Kalman Filter at time step k
$R_{abs,k}$  rotation matrix from absolute to platform coordinate frame
$R_{in,k}$  rotation increment matrix of platform motion
$u_l, u_r$  horizontal image plane coordinates of left and right cameras
$u_{l,M}, u_{r,M}$  horizontal image pixel coordinates of left and right cameras
$v_i$  velocity of wheel i at contact point with floor ($i \in \{1, 2, 3, 4\}$)
$v_{i,x}, v_{i,y}$  velocity components of wheel i in platform coordinate frame
$\dot{V}^g$  time based noise variance basis of gyroscope
$\bar{V}^g$  yaw rate based noise variance basis of gyroscope
$v_k$  measurement noise of Kalman Filter at time step k
$\bar{V}^o$  odometry variance basis
$V^c_l, V^c_r$  pixel based variance of left and right cameras
$w$  lateral wheel base of mobile platform
$w_k$  process noise of Kalman Filter at time step k
$W_k$  Jacobian matrix of Kalman Filter at time step k
$\underline{x}$  pose vector of mobile platform in absolute coordinate frame
$\dot{\underline{x}}$  velocity vector of mobile platform in absolute coordinate frame
$\underline{x}^a$  augmented state vector including landmark locations
${}^P\dot{\underline{x}}$  velocity vector of mobile platform in platform coordinate frame
$\underline{x}_k$  state vector of Kalman Filter at time step k
$\hat{\underline{x}}^-_k$  a priori state estimate of Kalman Filter at time step k
$\hat{\underline{x}}_k$  a posteriori state estimate of Kalman Filter at time step k
$\underline{x}^t$  position measurement vector of magnetic tracking device
${}^P x_i, {}^P y_i$  position of wheel module i in platform coordinate frame ($i \in \{1, 2, 3, 4\}$)
$\underline{z}_k$  measurement vector of Kalman Filter at time step k
$\Delta s$  actual traveled distance of mobile platform in one sample time interval
$\Delta s_i$  actual traveled distance of wheel i ($i \in \{1, 2, 3, 4\}$)
$\Delta s^o_i$  traveled distance of wheel i measured by odometry ($i \in \{1, 2, 3, 4\}$)
$\Delta t^g$  gyroscope sample time interval
$\Delta t^o$  odometry sample time interval
$\Delta\underline{x}_k$  pose increment vector of mobile platform in absolute coordinate frame
$\Delta^P\underline{x}_k$  pose increment vector of mobile platform in platform coordinate frame
$\varphi_i$  steering angle of wheel i ($i \in \{1, 2, 3, 4\}$)
$\Delta\psi^g$  orientation increment measured by gyroscope
$\dot{\psi}^g_{drift}$  gyroscope drift
Chapter 1
Motivation and Problem Definition
Generally speaking, the motion of an autonomous wheeled mobile robot features three
degrees of freedom: its position in a plane (two d.o.f.) and its orientation. In order for
the robot to successfully accomplish its mission, precise information about its pose
(position and orientation) is required.
Various sensors can be used to gather information about the robot’s pose. Inter-
nal sensors on the robot provide information about its internal state (wheel angles or
angular velocities, yaw rate, . . . ). Examples of internal sensors are wheel encoders, a
gyroscope or an accelerometer. Localization methods based on these types of sensors
are known as dead reckoning methods. Internal sensors usually present a relatively
high accuracy and sample rate. The principle of dead reckoning implies that position
increments are used to update an absolute position. Due to the nature of this process,
measurement errors and inaccuracies accumulate and increase the position uncertainty
with traveled distance. External sensors provide information about the robot’s envi-
ronment. Examples of external sensors are a laser range finder, a magnetic tracker or
a global positioning system (GPS). Usually, the information returned by external sen-
sors is an absolute position with respect to the environment. External sensors generally
present a relatively low accuracy and sample rate.
The basic idea of a sensor fusion algorithm is to maximize the likelihood of a
quantity (e.g. position) given redundant or complementary measurements originating
from different sensor devices. It is thus possible to combine the advantages of both types
of sensors: absolute position updates are periodically merged with the highly accurate
position information of a dead reckoning algorithm, thus resetting the growing position
uncertainty.
This work focuses on the development of a position estimator for an omnidirectional
wheeled mobile platform. The objective is to determine the pose of the mobile platform
as accurately as possible using multiple sensor devices. The term pose designates the
position as well as the orientation. The sensors available for this task are the following:
• wheel encoders
• a gyroscope
• a magnetic tracker
• a stereo camera
The disposition of these devices on the mobile platform can be seen in Figure 1.1.
It shall be emphasized that this is an omnidirectional platform, since independent
steering angles and wheel velocities can be assigned to each of the four wheel modules.
This enables decoupled motion in the three available degrees of freedom: longitudinal
velocity, lateral velocity and yaw rate.
This report is organized as follows: Chapter 2 establishes the mathematical de-
scription of the platform kinematics and its numerical implementation. Chapter 3
establishes mathematical error models for the various sensors in use on the mobile
platform. Chapter 4 presents the sensor fusion strategy used. Chapter 5 introduces
some techniques for improved localization using image pairs provided by a stereo cam-
era. Chapter 6 gives an overview of the conducted experiments and simulations and
their results. An assessment of the results and an outlook of the future work to be
done are given in Chapter 7.
Figure 1.1: Mobile platform, with the stereo camera, magnetic tracker receiver, wheel modules and gyroscope indicated
Chapter 2
Kinematical Model
Figure 2.1: Schematic top view of the mobile platform, showing the absolute coordinate frame $(O, x, y)$, the platform frame $({}^P x, {}^P y)$ with center $P(x, y)$ and orientation $\psi$, the four wheel modules WM1–WM4 with steering angles $\varphi_1, \ldots, \varphi_4$ and wheel diameter $2 r_w$, and the wheel base dimensions $2l$ and $2w$
A schematic view of the mobile platform containing the most important parameters for
the kinematical model is shown in Figure 2.1. The wheel modules are labelled WM1
to WM4.
Generally, the use of a kinematic model involves the relations between the coordi-
nates in two different spaces. A vector q in joint space is transformed into a vector x
in Cartesian space. In the case of the presented mobile platform, the angular veloci-
ties of the wheels are transformed into the linear velocities and the yaw rate (angular
velocity around the vertical axis) of the robot in the inertial coordinate frame. We will
subdivide this procedure into two separate steps:
• transformation of wheel velocities $v_i$ ($i \in \{1, 2, 3, 4\}$) into platform velocities ${}^P\dot{\underline{x}}$ in the platform coordinate frame $(P, {}^P x, {}^P y)$. This operation will be dealt with in Section 2.1.
• transformation of platform velocities ${}^P\dot{\underline{x}}$ in the platform coordinate frame into platform velocities $\dot{\underline{x}}$ in the inertial coordinate frame $(O, x, y)$. This operation will be dealt with in Section 2.2.
The platform velocity vectors are given by ${}^P\dot{\underline{x}} = [{}^P\dot{x} \;\; {}^P\dot{y} \;\; \dot{\psi}]^T$ and $\dot{\underline{x}} = [\dot{x} \;\; \dot{y} \;\; \dot{\psi}]^T$ and include the yaw rate $\dot{\psi}$.
2.1 Computing Incremental Position Updates
The mobile platform used in this work is equipped with four wheel modules, each of
which is fitted with two electric motors. One of the electric motors commands the
steering angle and the other one the revolution speed of the wheel. This amounts
to eight independent inputs available to command the platform motion. Since the
platform has only three degrees of freedom in the absolute coordinate frame (x, y and
ψ), the system of equations governing the platform motion is overdetermined. An
exact solution can only be found if the imaginary lines passing through the wheel
revolution axes intersect in one and the same point. If the steering angles of the four
wheel modules are not perfectly coordinated this condition is violated, as illustrated in
Figure 2.2. As a result wheel slip will occur. The center of the circle arc on which the
platform center point is moving will be somewhere in the region spanned by all wheel
axes intersection points. A similar situation is encountered when the revolution speeds
of the individual wheels are not perfectly coordinated.

Figure 2.2: Inadmissible wheel configuration
This redundancy needs to be taken into account when developing a kinematical
model of the mobile platform. Two possible methods for solving this problem are
proposed here. These methods are described in the next sections.
2.1.1 Least Squares Solution
Given a platform motion in its own coordinate frame ${}^P\dot{\underline{x}} = [{}^P\dot{x} \;\; {}^P\dot{y} \;\; \dot{\psi}]^T$, the corresponding wheel velocity components of wheel $i$ ($i \in \{1, 2, 3, 4\}$) are given by the following set of equations:
$$v_{i,x} = {}^P\dot{x} - \dot{\psi} \cdot {}^P y_i$$
$$v_{i,y} = {}^P\dot{y} + \dot{\psi} \cdot {}^P x_i \qquad (2.1)$$
Here $v_{i,x}$ and $v_{i,y}$ denote the wheel velocity components in the platform coordinate frame, and ${}^P x_i$ and ${}^P y_i$ denote the coordinates of the contact point of the wheel with the floor in the platform coordinate frame. This is illustrated in the adjacent figure. In order to compute the platform motion given the velocity components of each wheel, the system of eight equations obtained by concatenating the systems of equations (2.1) for all wheels must be solved for ${}^P\dot{x}$, ${}^P\dot{y}$ and $\dot{\psi}$. We define matrix M as follows:
$$M = \begin{bmatrix} 1 & 0 & -{}^P y_1 \\ 0 & 1 & {}^P x_1 \\ 1 & 0 & -{}^P y_2 \\ 0 & 1 & {}^P x_2 \\ 1 & 0 & -{}^P y_3 \\ 0 & 1 & {}^P x_3 \\ 1 & 0 & -{}^P y_4 \\ 0 & 1 & {}^P x_4 \end{bmatrix}$$
The system of equations (2.1) can now be written as follows:
$$\underline{v} = M \, {}^P\dot{\underline{x}} \qquad (2.2)$$
with $\underline{v} = [v_{1,x} \;\; v_{1,y} \;\; \ldots \;\; v_{4,x} \;\; v_{4,y}]^T$.
Because M is not a square matrix, we cannot solve system (2.2) for the unknown ${}^P\dot{\underline{x}}$ by matrix inversion. System (2.2) is said to be overdetermined, which was to be expected considering the kinematical redundancy discussed above. Left multiplying both sides of equation (2.2) by $M^T$, then by $(M^T M)^{-1}$, we obtain:
$${}^P\dot{\underline{x}} = (M^T M)^{-1} M^T \underline{v} \qquad (2.3)$$
$(M^T M)^{-1} M^T$ is the pseudoinverse of matrix M. The solution ${}^P\dot{\underline{x}}$ minimizes the residue of each equation contained in the system (2.1) in a least squares sense.
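As an illustration, a minimal numerical sketch of this least squares solution is given below (Python with NumPy). The wheel contact coordinates and velocity values used in the example are hypothetical placeholders, not the actual platform parameters.

import numpy as np

def platform_velocity_lsq(wheel_xy, v_wheel):
    """Least squares estimate of the platform velocity [Pxdot, Pydot, psidot]
    from the wheel velocity components, cf. equations (2.1)-(2.3)."""
    rows = []
    for (px, py) in wheel_xy:
        rows.append([1.0, 0.0, -py])   # v_i,x = Pxdot - psidot * Pyi
        rows.append([0.0, 1.0,  px])   # v_i,y = Pydot + psidot * Pxi
    M = np.array(rows)
    # pseudoinverse solution (2.3): (M^T M)^-1 M^T v
    return np.linalg.pinv(M) @ v_wheel

# hypothetical wheel contact points (l = 0.3 m, w = 0.25 m) and pure forward motion
wheel_xy = np.array([[0.3, 0.25], [0.3, -0.25], [-0.3, -0.25], [-0.3, 0.25]])
v_wheel = np.array([0.5, 0.0, 0.5, 0.0, 0.5, 0.0, 0.5, 0.0])
print(platform_velocity_lsq(wheel_xy, v_wheel))   # approximately [0.5, 0.0, 0.0]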
2.1.2 Differential Drive Solution
For a differential drive robot with two wheels, the kinematical equations are given by:
$${}^P\dot{x} = \frac{v_r + v_l}{2}, \qquad {}^P\dot{y} = 0, \qquad \dot{\psi} = \frac{v_r - v_l}{2w} \qquad (2.4)$$
Here, $v_r$ and $v_l$ are the linear velocities of the right and left wheel respectively and $2w$ is the lateral wheelbase, as illustrated in the adjacent figure. For the omnidirectional mobile platform, a similar procedure can be applied: the velocity along the platform x-axis (y-axis) is computed as the average x-velocity (y-velocity) of the individual wheels. The yaw rate is the average of the yaw rates computed by grouping the wheel modules two by two:
$${}^P\dot{x} = \frac{v_{1,x} + v_{2,x} + v_{3,x} + v_{4,x}}{4}$$
$${}^P\dot{y} = \frac{v_{1,y} + v_{2,y} + v_{3,y} + v_{4,y}}{4}$$
$$\dot{\psi} = \frac{1}{4}\left( \frac{v_{1,x} - v_{2,x}}{2w} + \frac{v_{2,y} - v_{3,y}}{2l} + \frac{v_{4,x} - v_{3,x}}{2w} + \frac{v_{1,y} - v_{4,y}}{2l} \right) \qquad (2.5)$$
Here $2l$ and $2w$ are the distances separating the individual wheel modules as illustrated in Figure 2.1. For numerical implementation it is necessary to derive a discrete time version of these equations. Let $\Delta t^o$ designate the sample time interval of the odometry and let us assume that the wheel velocities $v_i$ and the steering angle rates $\dot{\varphi}_i$ are constant. We write the system of equations (2.5) at time step $k$:
$$\Delta^P x_k = \frac{1}{4} \sum_{i=1}^{4} \left(\lambda_i \cdot v_{i,x} \cdot \Delta t^o\right)$$
$$\Delta^P y_k = \frac{1}{4} \sum_{i=1}^{4} \left(\lambda_i \cdot v_{i,y} \cdot \Delta t^o\right)$$
$$\Delta\psi_k = \frac{1}{4}\left( \frac{\lambda_1 v_{1,x} \Delta t^o - \lambda_2 v_{2,x} \Delta t^o}{2w} + \frac{\lambda_4 v_{4,x} \Delta t^o - \lambda_3 v_{3,x} \Delta t^o}{2w} + \frac{\lambda_2 v_{2,y} \Delta t^o - \lambda_3 v_{3,y} \Delta t^o}{2l} + \frac{\lambda_1 v_{1,y} \Delta t^o - \lambda_4 v_{4,y} \Delta t^o}{2l} \right) \qquad (2.6)$$
with
$$\lambda_i = \frac{\sin\!\left(\frac{\Delta\varphi_{i,k}}{2}\right)}{\frac{\Delta\varphi_{i,k}}{2}}$$
The adjustment factor $\lambda_i$ accounts for the fact that the path driven by the wheel during one sampling time interval is a circle arc and not a line segment. The steps leading from (2.5) to (2.6) are discussed in Section 2.2 for a similar set of equations.
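A small numerical sketch of the discrete time update (2.6) is given below (Python with NumPy); it is a direct transcription of the equations with the wheel indices of Figure 2.1 and is only meant to illustrate the computation, not the platform's actual implementation.

import numpy as np

def odometry_increment(v, dphi, dt, l, w):
    """Pose increment in the platform frame for one odometry sample interval,
    following the averaging solution (2.6).

    v    : (4, 2) wheel velocity components [[v1x, v1y], ..., [v4x, v4y]]
    dphi : (4,) steering angle increments during the sample interval [rad]
    dt   : odometry sample time Delta t_o [s]
    l, w : half the longitudinal / lateral wheel base [m]
    """
    # arc correction lambda_i = sin(dphi_i/2)/(dphi_i/2); np.sinc(x) = sin(pi x)/(pi x)
    lam = np.sinc(dphi / (2.0 * np.pi))
    ds = lam[:, None] * v * dt                 # corrected wheel displacement components
    dx = ds[:, 0].mean()                       # Delta Px_k
    dy = ds[:, 1].mean()                       # Delta Py_k
    dpsi = 0.25 * ((ds[0, 0] - ds[1, 0]) / (2 * w) + (ds[3, 0] - ds[2, 0]) / (2 * w)
                   + (ds[1, 1] - ds[2, 1]) / (2 * l) + (ds[0, 1] - ds[3, 1]) / (2 * l))
    return dx, dy, dpsi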
2.1.3 Experimental Comparison of Both Solutions
The redundancy resolution methods presented above are evaluated for different trajec-
tories of the mobile platform. For a circular trajectory the end position and orientation
of the platform are compared to the end position and orientation estimated by odom-
etry data. The resulting pose error is averaged over a series of ten runs. The estimated
trajectory using both algorithms is shown in Figure 2.3a and Figure 2.3b on page 11
(dd: differential drive, lsq: least squares). The estimated platform orientation using
both algorithms is shown in Figure 2.4a and Figure 2.4b on page 12, along with the
orientation obtained by integrating the gyroscope measurements. It can be seen that
the least squares solution approximates the gyroscope data better. The average pose
error for ten runs on a circle of radius R = 1.75 m is given in Table 2.1a. It is clear that
the least squares algorithm performs best on this type of trajectory. By monitoring the
steering angle of each wheel it turned out that the trajectory was completed with an
inadmissible wheel configuration (IWC). To ensure an admissible wheel configuration
(AWC), a similar path was driven while setting the wheels to the required steering
CHAPTER 2. KINEMATICAL MODEL 10
angles for the projected path without having to rely on the wheel steering angle con-
trollers. The results are shown in Table 2.1b for a circle of radius R = 1 m. In this
situation both algorithms yield equivalent results. Similar observations were made for
straight line trajectories.
It must be concluded that the least squares algorithm is more likely to provide better
pose estimates than the differential drive algorithm in the case of inadmissible wheel
configurations, which are typically encountered on trajectories including curves and
rapid changes in direction of motion. It should also be pointed out that it is planned
to reduce the occurrence of inadmissible wheel configurations by using an improved
control structure for the wheel modules. The subsequent developments in this thesis
are based on the differential drive solution, as it allows an analytical analysis of the
odometric uncertainty propagation. The advantages of the least squares solution tend
to vanish anyway when sensor fusion is performed.
Figure 2.3a: Estimated trajectory using dd solution (end position error: $e_x$ = 0.094 m, $e_y$ = 0.598 m)

Figure 2.3b: Estimated trajectory using lsq solution (end position error: $e_x$ = 0.024 m, $e_y$ = 0.327 m)
Figure 2.4a: Estimated orientation using dd solution (end orientation error: $e_\psi$ = 19.2°)

Figure 2.4b: Estimated orientation using lsq solution (end orientation error: $e_\psi$ = 10.6°)
Method | Direction | elong [mm] | elat [mm] | eψ [°]
Mean | counterclockwise | 622 | -96 | 19.8
Mean | clockwise | 328 | -47 | -9.8
Least squares | counterclockwise | 350 | -24 | 11.2
Least squares | clockwise | 97 | -9 | -2.4

Table 2.1a: Pose error using odometry (circle, R = 1.75 m), IWC

Method | Direction | elong [mm] | elat [mm] | eψ [°]
Mean | counterclockwise | 57 | -3 | 3.6
Mean | clockwise | -26 | 3 | 1.5
Least squares | counterclockwise | 49 | -2 | 3.1
Least squares | clockwise | -32 | 3 | 1.8

Table 2.1b: Pose error using odometry (circle, R = 1 m), AWC
2.2 Absolute Position Updating
In the following analysis, the time dependency of the motion variables is explicitly
indicated. Considering the omnidirectional nature of the mobile platform, the mathe-
matical model for the platform motion is given by the following equations:
$$\dot{x}(t) = {}^P\dot{x}(t) \cdot \cos(\psi(t)) - {}^P\dot{y}(t) \cdot \sin(\psi(t))$$
$$\dot{y}(t) = {}^P\dot{x}(t) \cdot \sin(\psi(t)) + {}^P\dot{y}(t) \cdot \cos(\psi(t)) \qquad (2.7)$$
Here x(t) and y(t) denote the absolute platform positions and ψ(t) denotes the
platform orientation. Together, these three variables define what is called the platform
pose. As in the preceding section, ${}^P\dot{x}(t)$ and ${}^P\dot{y}(t)$ designate the platform velocities in its own coordinate frame and are computed according to the system of equations (2.5).
The general analytical solution to this system of differential equations is difficult to
compute. It is thus assumed that both the translational and the angular velocities are
constant over the integration period. In the following equations this is indicated by
the omission of the time dependency for these variables. Note that the assumption of
constant velocities is akin to saying that the platform moves on a trajectory of constant
curvature with constant velocity in the integration interval. Under this assumption the
solution of (2.7) is written as:
$$x(t) = x_0 + \int_0^t \dot{x}(t)\,dt = x_0 + \frac{{}^P\dot{x}}{\dot{\psi}}\left[\sin(\psi_0 + \dot{\psi}t) - \sin(\psi_0)\right] + \frac{{}^P\dot{y}}{\dot{\psi}}\left[\cos(\psi_0 + \dot{\psi}t) - \cos(\psi_0)\right]$$
$$y(t) = y_0 + \int_0^t \dot{y}(t)\,dt = y_0 - \frac{{}^P\dot{x}}{\dot{\psi}}\left[\cos(\psi_0 + \dot{\psi}t) - \cos(\psi_0)\right] + \frac{{}^P\dot{y}}{\dot{\psi}}\left[\sin(\psi_0 + \dot{\psi}t) - \sin(\psi_0)\right]$$
$$\psi(t) = \psi_0 + \dot{\psi} \cdot t$$
Using trigonometric identities we can rewrite this system of equations in the following way:
$$x(t) = x_0 + {}^P\dot{x} \cdot t \cdot \frac{\sin\frac{\dot{\psi}t}{2}}{\frac{\dot{\psi}t}{2}} \cdot \cos\!\left(\psi_0 + \frac{\dot{\psi}t}{2}\right) - {}^P\dot{y} \cdot t \cdot \frac{\sin\frac{\dot{\psi}t}{2}}{\frac{\dot{\psi}t}{2}} \cdot \sin\!\left(\psi_0 + \frac{\dot{\psi}t}{2}\right)$$
$$y(t) = y_0 + {}^P\dot{x} \cdot t \cdot \frac{\sin\frac{\dot{\psi}t}{2}}{\frac{\dot{\psi}t}{2}} \cdot \sin\!\left(\psi_0 + \frac{\dot{\psi}t}{2}\right) + {}^P\dot{y} \cdot t \cdot \frac{\sin\frac{\dot{\psi}t}{2}}{\frac{\dot{\psi}t}{2}} \cdot \cos\!\left(\psi_0 + \frac{\dot{\psi}t}{2}\right)$$
$$\psi(t) = \psi_0 + \dot{\psi} \cdot t$$
Note that the particular case of a straight line motion of the platform is included since $\lim_{x\to 0} \frac{\sin x}{x} = 1$. For a numerical implementation of this model, we adopt a discretized version of the motion equation. For time step $k$ the discrete time motion equation of the platform is given by:
$$\Delta x_k = \Delta^P x_k \cdot \frac{\sin\frac{\Delta\psi_k}{2}}{\frac{\Delta\psi_k}{2}} \cdot \cos\!\left(\psi_k + \frac{\Delta\psi_k}{2}\right) - \Delta^P y_k \cdot \frac{\sin\frac{\Delta\psi_k}{2}}{\frac{\Delta\psi_k}{2}} \cdot \sin\!\left(\psi_k + \frac{\Delta\psi_k}{2}\right)$$
$$\Delta y_k = \Delta^P x_k \cdot \frac{\sin\frac{\Delta\psi_k}{2}}{\frac{\Delta\psi_k}{2}} \cdot \sin\!\left(\psi_k + \frac{\Delta\psi_k}{2}\right) + \Delta^P y_k \cdot \frac{\sin\frac{\Delta\psi_k}{2}}{\frac{\Delta\psi_k}{2}} \cdot \cos\!\left(\psi_k + \frac{\Delta\psi_k}{2}\right) \qquad (2.8)$$
In case the platform motion is computed using odometry alone the platform position increments $\Delta^P x_k$, $\Delta^P y_k$ and $\Delta\psi_k$ are given by the system of equations (2.6). The numerical integration of (2.8) is performed using the Euler algorithm:
$$x_{k+1} = x_k + \Delta x_k, \qquad y_{k+1} = y_k + \Delta y_k, \qquad \psi_{k+1} = \psi_k + \Delta\psi_k \qquad (2.9)$$
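For illustration, one dead reckoning step combining (2.8) and (2.9) could be sketched as follows (Python with NumPy); this is a minimal transcription of the equations, not the implementation running on the platform.

import numpy as np

def dead_reckoning_step(pose, d_pose_platform):
    """One Euler integration step of the platform pose, equations (2.8)-(2.9).

    pose            : [x_k, y_k, psi_k] in the absolute frame
    d_pose_platform : [dPx_k, dPy_k, dpsi_k] increment in the platform frame
    """
    x, y, psi = pose
    dxp, dyp, dpsi = d_pose_platform
    corr = np.sinc(dpsi / (2.0 * np.pi))       # sin(dpsi/2)/(dpsi/2), arc correction
    a = psi + 0.5 * dpsi                       # mid-interval orientation
    dx = corr * (dxp * np.cos(a) - dyp * np.sin(a))
    dy = corr * (dxp * np.sin(a) + dyp * np.cos(a))
    return np.array([x + dx, y + dy, psi + dpsi])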
The systems of equations (2.8) and (2.9) can be combined using vector notation:
$$\underline{x}_{k+1} = \underline{x}_k + R_{abs,k} \cdot \Delta^P\underline{x}_k \qquad (2.10)$$
with
$$\underline{x}_k = [x_k \;\; y_k \;\; \psi_k]^T, \qquad \Delta^P\underline{x}_k = [\Delta^P x_k \;\; \Delta^P y_k \;\; \Delta\psi_k]^T$$
$$R_{abs,k} = \begin{bmatrix} \cos\left(\psi_k + \frac{\Delta\psi_k}{2}\right) & -\sin\left(\psi_k + \frac{\Delta\psi_k}{2}\right) & 0 \\ \sin\left(\psi_k + \frac{\Delta\psi_k}{2}\right) & \cos\left(\psi_k + \frac{\Delta\psi_k}{2}\right) & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
Chapter 3
Sensor Error Models
In order to perform sensor fusion, information about the accuracy of the different
sensors in use must be obtained. The measurement error Z can be considered a random
variable and consists of an offset error $e_{offset}$ and a zero mean noise part $E_{noise}$ as described by the following equation:
$$Z = e_{offset} + E_{noise} \qquad (3.1)$$
The noise term is usually considered to follow a Gaussian probability distribution
as indicated in Definition 6 in Appendix A, in which case it is referred to as white noise.
In mathematical terms the mean is used to describe the systematic error induced by the
measurement offset whereas the variance is used to describe the intensity of the noise
process. The formulas for computing the mean and variance are given in Definition 1
and Definition 2 in Appendix A. In the case of white noise these parameters are
sufficient to describe the characteristics of the random variable Z.
The following sensors are used for the localization of the mobile platform:
• encoders for the steering and revolution angles of each wheel
• a gyroscope
• a magnetic tracker
• a stereo camera
The rest of this section deals with the modeling of the available sensors and an assess-
ment of their individual accuracies.
3.1 Odometry Error Model
The angular positions of the driving and the steering motors of each wheel module are
measured by angular encoders fitted to the motor shafts. These encoders subdivide
a complete shaft revolution into 500 periods. The gear transmission ratio is equal
to idrive = 30 for the driving motor and isteer = 127.5 for the steering motor. One
wheel revolution is thus subdivided into 15’000 and one steering revolution into 63’750
encoder periods. The encoder values are read out at a rate of 125 Hz. Because of
the high transmission ratio and sampling rate the errors caused by limited encoder
resolution and sampling rate are neglected in subsequent developments.
According to [1], the following model holds for the traveled distance measured by
the wheel encoders:
$$\Delta s^o_i = \frac{1}{1 + \epsilon} \cdot \Delta s_i + \Delta s_{i,noise} \qquad (3.2)$$
where $\Delta s_i = v_i \cdot \Delta t^o$ represents the actual distance traveled by wheel $i$. The first term
on the right hand side of (3.2) takes into account the systematic errors induced by
erroneous wheel radii, wheel misalignment and other systematic error sources. The
second term on the right hand side takes into account non-systematic errors induced
by travel on uneven floors, wheel slippage and other non-systematic error sources.
The wheel steering angle measured by the encoder is best described by the following
model:
$$\varphi^o_i = \varphi_i + \Delta\varphi_{i,mis} \qquad (3.3)$$
Here $\varphi_i$ represents the actual steering angle of wheel $i$ and $\Delta\varphi_{i,mis}$ is the misalignment
of the wheel module with respect to the platform longitudinal axis. Note that in
model (3.3) it was assumed that the only error present is a systematic one. In fact
the gear train between the steering motor input and the wheel steering angle output
is considered to be perfectly rigid. Unlike the transmission between the driving motor
and the floor where wheel slip can occur, the steering angle is not affected by any non
systematic errors.
In the case of the mobile platform considered here, the systematic errors $\epsilon$ and $\Delta\varphi_{i,mis}$ mentioned above were greatly reduced by calibrating the platform wheel radii
and wheel steering angle offsets. These errors will thus not be treated separately. The
only error taken into account in subsequent developments is the non systematic error
in the wheel traveled distances. This is not to say that there are no systematic errors
anymore. According to [2] systematic errors that cannot be described deterministically
can be treated like random errors by using the second order moment of the error
distribution instead of the variance. The definition of the second order moment is
given in equation (A.1) in Appendix A. This procedure will be applied whenever the
systematic error fails to be completely eliminated by suitable calibration.
3.1.1 Pose Uncertainty Model
Assuming that the irregularities on the floor are independent in size, shape and location,
the following statements can be made:
1. The error in the measured traveled distance of one wheel at sampling time k
is statistically independent of the error in the measured traveled distance at
sampling time k + 1. Next, the distribution of this error is assumed Gaussian.
Although this assumption is not absolutely necessary for this model to be used
in relationship with a Kalman Filter, it nevertheless simplifies matters. If it is
assumed that the random error in the measured traveled distance of wheel i is
proportional to the actual distance traveled by this wheel, the variance of the
error in the traveled distance verifies the following relationship:
$$V^o_i = \bar{V}^o \cdot \Delta s_i \qquad (3.4)$$
where $V^o_i$ is the error variance of the traveled distance measured by wheel module $i$ and $\bar{V}^o$ is a proportionality coefficient termed the variance basis, which will have to be determined experimentally. Its units are $\mathrm{m}^2/\mathrm{m}$.
2. The errors in the measured traveled distances of the individual wheels are statistically independent of each other, and thus uncorrelated. The error covariance matrix $C^o_{\Delta s_i}$ of the measurement vector $[\Delta s_1 \;\; \Delta s_2 \;\; \Delta s_3 \;\; \Delta s_4]^T$ can be assumed to be a diagonal matrix given by equation (3.5).
$$C^o_{\Delta s_i} = \bar{V}^o \cdot \begin{bmatrix} \Delta s_1 & 0 & 0 & 0 \\ 0 & \Delta s_2 & 0 & 0 \\ 0 & 0 & \Delta s_3 & 0 \\ 0 & 0 & 0 & \Delta s_4 \end{bmatrix} \qquad (3.5)$$
To understand how the uncertainty in the traveled wheel distances translates into an uncertainty in the platform pose increments in its own coordinate frame, the error covariance matrix needs to be projected according to the following equation:
$$C^o_{\Delta^P x} = J^{\Delta^P x}_{\Delta s_i} \, C^o_{\Delta s_i} \, \left(J^{\Delta^P x}_{\Delta s_i}\right)^T \qquad (3.6)$$
where $J^{\Delta^P x}_{\Delta s_i}$ is the Jacobian of the transformation relating wheel traveled distances to incremental platform poses in the platform coordinate frame according to the system of equations (2.3) on page 7 or (2.6) on page 9:
$$J^{\Delta^P x}_{\Delta s_i} = \begin{bmatrix} \frac{\partial \Delta^P x}{\partial \Delta s_1} & \frac{\partial \Delta^P x}{\partial \Delta s_2} & \frac{\partial \Delta^P x}{\partial \Delta s_3} & \frac{\partial \Delta^P x}{\partial \Delta s_4} \\[4pt] \frac{\partial \Delta^P y}{\partial \Delta s_1} & \frac{\partial \Delta^P y}{\partial \Delta s_2} & \frac{\partial \Delta^P y}{\partial \Delta s_3} & \frac{\partial \Delta^P y}{\partial \Delta s_4} \\[4pt] \frac{\partial \Delta\psi}{\partial \Delta s_1} & \frac{\partial \Delta\psi}{\partial \Delta s_2} & \frac{\partial \Delta\psi}{\partial \Delta s_3} & \frac{\partial \Delta\psi}{\partial \Delta s_4} \end{bmatrix} \qquad (3.7)$$
As the platform moves along, the uncertainty in the platform pose increments is propagated to yield a pose uncertainty in the absolute coordinate system. This is described by the following pose error covariance update equation:
$$P^o(t+\Delta t) = J^{x(t+\Delta t)}_{x(t)} \, P^o(t) \, \left(J^{x(t+\Delta t)}_{x(t)}\right)^T + J^{x(t+\Delta t)}_{\Delta^P x(t)} \, C^o_{\Delta^P x}(t) \, \left(J^{x(t+\Delta t)}_{\Delta^P x(t)}\right)^T \qquad (3.8)$$
with the pose error covariance matrix at time $t$
$$P^o(t) = \begin{bmatrix} p_{11}(t) & p_{12}(t) & p_{13}(t) \\ p_{21}(t) & p_{22}(t) & p_{23}(t) \\ p_{31}(t) & p_{32}(t) & p_{33}(t) \end{bmatrix} \qquad (3.9)$$
and the Jacobian matrices
$$J^{x(t+\Delta t)}_{x(t)} = \begin{bmatrix} \frac{\partial x(t+\Delta t)}{\partial x(t)} & \frac{\partial x(t+\Delta t)}{\partial y(t)} & \frac{\partial x(t+\Delta t)}{\partial \psi(t)} \\[4pt] \frac{\partial y(t+\Delta t)}{\partial x(t)} & \frac{\partial y(t+\Delta t)}{\partial y(t)} & \frac{\partial y(t+\Delta t)}{\partial \psi(t)} \\[4pt] \frac{\partial \psi(t+\Delta t)}{\partial x(t)} & \frac{\partial \psi(t+\Delta t)}{\partial y(t)} & \frac{\partial \psi(t+\Delta t)}{\partial \psi(t)} \end{bmatrix}, \qquad (3.10)$$
$$J^{x(t+\Delta t)}_{\Delta^P x(t)} = \begin{bmatrix} \frac{\partial x(t+\Delta t)}{\partial \Delta^P x(t)} & \frac{\partial x(t+\Delta t)}{\partial \Delta^P y(t)} & \frac{\partial x(t+\Delta t)}{\partial \Delta\psi(t)} \\[4pt] \frac{\partial y(t+\Delta t)}{\partial \Delta^P x(t)} & \frac{\partial y(t+\Delta t)}{\partial \Delta^P y(t)} & \frac{\partial y(t+\Delta t)}{\partial \Delta\psi(t)} \\[4pt] \frac{\partial \psi(t+\Delta t)}{\partial \Delta^P x(t)} & \frac{\partial \psi(t+\Delta t)}{\partial \Delta^P y(t)} & \frac{\partial \psi(t+\Delta t)}{\partial \Delta\psi(t)} \end{bmatrix} \qquad (3.11)$$
The Jacobians (3.10) and (3.11) can be computed using equations (2.8) and (2.9) on
page 15.
Equation (3.8) plays an important role in the Kalman Filter and is used to update
the state uncertainty, i.e. the uncertainty in the estimated platform pose. More on
this subject will follow in Chapter 4.
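A compact numerical sketch of this covariance propagation is given below (Python with NumPy), using finite differences to evaluate the Jacobians (3.10) and (3.11) for an arbitrary motion function such as the dead reckoning step (2.8)-(2.9); it is only an illustration of equation (3.8), not the filter code used on the platform.

import numpy as np

def propagate_pose_covariance(P_pose, C_dpose, pose, d_pose, motion, eps=1e-6):
    """Pose error covariance update (3.8) with numerically evaluated Jacobians.

    P_pose  : (3, 3) current pose covariance P_o(t)
    C_dpose : (3, 3) covariance of the platform-frame pose increment, cf. (3.6)
    pose    : current pose [x, y, psi]
    d_pose  : platform-frame pose increment [dPx, dPy, dpsi]
    motion  : function (pose, d_pose) -> next pose, e.g. equations (2.8)-(2.9)
    """
    def jacobian(f, x0):
        f0 = np.asarray(f(x0), dtype=float)
        J = np.zeros((f0.size, len(x0)))
        for i in range(len(x0)):
            xp = np.array(x0, dtype=float)
            xp[i] += eps
            J[:, i] = (np.asarray(f(xp), dtype=float) - f0) / eps
        return J

    J_pose = jacobian(lambda p: motion(p, d_pose), pose)      # Jacobian (3.10)
    J_dpose = jacobian(lambda d: motion(pose, d), d_pose)     # Jacobian (3.11)
    return J_pose @ P_pose @ J_pose.T + J_dpose @ C_dpose @ J_dpose.T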
Now that the arithmetics of the odometric uncertainty propagation have been established, a procedure for computing the odometric variance basis $\bar{V}^o$ can be proposed:
1. Using equation (3.8) an analytical expression for $P^o(t+\Delta t)$ is computed using the kinematical model developed in Chapter 2.
2. By letting $\Delta t \to 0$, the differential equation governing the evolution of $P^o(t)$ is found. This matrix differential equation consists of multiple differential equations for the matrix entries $p_{ij}(t)$.
3. An analytical expression for the pose error covariance matrix entries $p_{11}(t)$, $p_{22}(t)$ and $p_{33}(t)$ is found by solving the system of differential equations mentioned above. This solution is likely to exist only for cases involving very simple platform trajectories (e.g. a straight line). These analytically computed position and orientation variances are then solved for the parameter $\bar{V}^o$.
4. A series of experiments involving simple platform trajectories will yield an approximate value for the position and orientation variances mentioned above and thus enable one to compute an approximate value for $\bar{V}^o$.
5. By injecting $\bar{V}^o$ in a simulation model, it is possible to compare simulated and real position and orientation variances for different platform trajectories. The comparison of simulated and real variances will yield two important results:
• is the discrete simulation model accurate enough to propagate the pose uncertainty (discretization error)?
• acceptance or rejection of the proposed odometry model
Note that the proposed procedure requires the kinematic model of the platform to be fairly simple to avoid troublesome analytical expressions. Therefore, the linear redundancy resolution scheme discussed in Section 2.1.2 will be used instead of the more accurate model proposed in Section 2.1.1. The arithmetics involved in the method presented are derived in [2] for the simpler case of a two wheeled differential drive robot. Here, we will simply present the results for the case of a four wheeled omnidirectional mobile robot with the proposed odometry model.
3.1.2 Analytical Expression for $\bar{V}^o$
The procedure described above was carried out for the case of the platform moving on a straight line trajectory along the platform longitudinal ${}^P x$-axis. The analytical expression for the odometry variance basis is as follows:
$$\bar{V}^o = \frac{4 \cdot p_{11}}{\Delta s} = \frac{48 \cdot w \cdot p_{22}}{\Delta s^3} = \frac{16 \cdot w^2 \cdot p_{33}}{\Delta s} \qquad (3.12)$$
with
$p_{11}$: variance of platform position along platform ${}^P x$-axis
$p_{22}$: variance of platform position along platform ${}^P y$-axis
$p_{33}$: variance of platform orientation
$w$: half the platform lateral wheel base according to Figure 2.1
$\Delta s$: traveled distance of the platform center point P
Some characteristics of the derived formulas should be noted:
• the variance of the position along the platform longitudinal ${}^P x$-axis and of the orientation is proportional to the traveled distance
• the variance of the position along the platform lateral ${}^P y$-axis is proportional to the third power of the traveled distance
• none of the variances includes any time dependency
The second item reflects the fact that the lateral position uncertainty will be influ-
enced by the orientation uncertainty. The third item means that the experimentally
measured variances should be independent of the velocity with which the trajectory is
driven. Note also that the platform orientation variance will decrease as w increases,
which illustrates the stabilizing effect of a large wheel base.
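As a quick sanity check of (3.12), the three estimates of the variance basis can be evaluated from measured (or simulated) variances as sketched below (Python); the numerical values in the example call are purely illustrative placeholders and not the experimental values of Section 3.1.3.

def variance_basis_estimates(p11, p22, p33, ds, w):
    """Three estimates of the odometry variance basis from equation (3.12),
    valid for straight line motion along the platform x-axis.

    p11, p22, p33 : longitudinal, lateral and orientation variances [m^2, m^2, rad^2]
    ds            : traveled distance [m]
    w             : half the lateral wheel base [m]
    """
    from_long = 4.0 * p11 / ds
    from_lat = 48.0 * w * p22 / ds**3
    from_orient = 16.0 * w**2 * p33 / ds
    return from_long, from_lat, from_orient

# illustrative call with placeholder values
print(variance_basis_estimates(p11=3e-5, p22=5e-3, p33=8e-4, ds=4.0, w=0.25))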
3.1.3 Experimental Results and Analysis
Straight line motion The mobile platform was driven along a straight line trajec-
tory. There were three available tracks of 4000 mm each and one track of 8000 mm. To
begin with, the commanded platform velocity was set to 0.2 m/s. For the shorter track,
a series of ten runs was distributed on the three available tracks. A series of ten runs was
also performed on the long track. After each run, the exact position and orientation of
the mobile platform was measured. Recorded data from the wheel modules were then
used to estimate the performed trajectory according to the algorithm presented in Sec-
tion 2.1.2 on page 8. The pose error was computed according to the following equation:
$$\underline{e}_x = \underline{x}^m_{end} - \underline{x}^o_{end} = [e_x \;\; e_y \;\; e_\psi]^T \qquad (3.13)$$
where $\underline{x}^m_{end}$ is the actual (measured) end pose and $\underline{x}^o_{end}$ is the estimated end pose using odometry data.
Note that the pose error is measured in the absolute coordinate frame. The units
are [m] for the position error and [rad] for the orientation error. For each series of
ten runs the sample average, variance, standard deviation and second order moment of
the pose error were computed according to the formulas in Appendix A. The shorter
tracks were also covered with velocities of 0.6 m/s and 1.0 m/s (ten runs each).
The results are presented in Table 3.1 on page 25. Despite careful calibration of the wheel radii and steering angles to minimize the systematic pose errors, a substantial systematic error remains, as indicated by non-zero mean values of the position and orientation errors. This is why the variance basis $\bar{V}^o$ is computed using the second order moment $m_2$ instead of the variance $s^2$. Table 3.1 also lists the entries of the main diagonal of the pose uncertainty matrix obtained by numerical simulation using odometry data and the variance basis $\bar{V}^o$ obtained from the lateral position error.
These values are averaged over 3 simulation runs.
The variance basis $\bar{V}^o$ obtained from the lateral position and orientation errors yields similar results for every set of measurements ($1.065 \cdot 10^{-4} < \bar{V}^o < 1.436 \cdot 10^{-4}$; the lower variance basis for the 8000 mm track is partly due to the fact that all runs were performed on the same track, thus lowering the spread in the pose error).

Track length [mm] | Velocity [m/s] | $\bar{e}_x$ | $s^2(e_x)$ | $s(e_x)$ | $m_2(e_x)$ | $\bar{V}^o$ | Simulated variances
4000 | 0.2 | -4.980e-03 | 1.236e-05 | 3.515e-03 | 3.593e-05 | 3.438e-05 | 1.334e-04
     |     |  6.874e-02 | 6.996e-04 | 2.645e-02 | 5.355e-03 | 1.271e-04 | 5.366e-03
     |     |  2.705e-02 | 9.078e-05 | 9.528e-03 | 8.132e-04 | 1.124e-04 | 9.192e-04
4000 | 0.6 | -2.284e-03 | 1.843e-05 | 4.293e-03 | 2.180e-05 | 2.078e-05 | 1.514e-04
     |     |  7.542e-02 | 4.110e-04 | 2.027e-02 | 6.057e-03 | 1.419e-04 | 6.043e-03
     |     |  2.411e-02 | 2.555e-04 | 1.599e-02 | 8.110e-04 | 1.116e-04 | 1.030e-03
4000 | 1.0 | -3.343e-03 | 2.242e-05 | 4.735e-03 | 3.136e-05 | 2.988e-05 | 1.512e-04
     |     |  7.155e-02 | 1.120e-03 | 3.346e-02 | 6.127e-03 | 1.436e-04 | 6.026e-03
     |     |  2.824e-02 | 1.849e-04 | 1.360e-02 | 9.641e-04 | 1.327e-04 | 1.033e-03
8000 | 0.2 |  1.365e-04 | 2.077e-05 | 4.558e-03 | 1.871e-05 | 9.350e-06 | 2.381e-04
     |     |  1.753e-01 | 8.977e-04 | 2.996e-02 | 3.154e-02 | 1.065e-04 | 3.270e-02
     |     |  4.265e-02 | 4.360e-05 | 6.603e-03 | 1.858e-03 | 1.341e-04 | 1.494e-03

Table 3.1: Experimental results for the determination of $\bar{V}^o$ (straight line motion); for each track length and velocity the three rows give the $e_x$, $e_y$ and $e_\psi$ components of the pose error.

$\bar{e}_{x,ccw}$ | $s^2_{ccw}(e_x)$ | $\bar{e}_{x,cw}$ | $s^2_{cw}(e_x)$ | $\bar{e}_x$ | $m_2(e_x)$ | Simulated variances ($\bar{V}^o = 1.4 \cdot 10^{-4}$ m²/m)
 5.675e-02 | 3.122e-04 | -2.610e-02 | 1.782e-04 |  1.532e-02 | 2.135e-03 | 2.276e-03
-3.190e-03 | 2.483e-07 |  2.754e-03 | 7.292e-06 | -2.180e-04 | 1.171e-05 | 6.894e-04
 6.214e-02 | 3.417e-04 |  2.641e-02 | 9.613e-05 |  4.427e-02 | 2.444e-03 | 1.558e-03

Table 3.2: Experimental results for the determination of $\bar{V}^o$ (circular motion); the three rows give the $e_x$, $e_y$ and $e_\psi$ components of the pose error.

However, the variance basis computed with the longitudinal position error is up to one
order of magnitude smaller ($9.350 \cdot 10^{-6} < \bar{V}^o < 3.438 \cdot 10^{-5}$). This result might indicate that the assumption of a diagonal error covariance matrix for the traveled distance of the wheel modules as expressed in equation (3.5) on page 20 does not hold true. Nevertheless, the model seems to describe the lateral position and orientation uncertainty relatively accurately, which is why it will not be rejected at this stage. Additionally, the results confirm that there is no significant dependence of $\bar{V}^o$ on the velocity with which the path is driven.
Comparing the measured second order moment $m_2$ of the position and orientation errors to the simulated variances, it can be seen that the measured second order moment of the lateral position error is very well approximated by numerical simulation, which validates our analytical results and confirms that the discretization and roundoff errors in the numerical model are acceptable (the variance basis $\bar{V}^o$ used for simulation is exactly the one obtained from the respective lateral position error second order moment). The simulated orientation error variance is also in acceptable proximity to the measured orientation second order moment $m_2$. As expected, the simulated longitudinal position error variance is much larger than the measured second order moment. A typical simulated standard deviation in the longitudinal position error is approx. 13 mm as opposed to approx. 5 mm for the measurements. As a consequence, the chosen strategy will systematically underestimate the longitudinal precision of the odometry data, while correctly weighting the lateral position and orientation uncertainty.
The implemented odometry variance basis is:
$$\bar{V}^o = 1.4 \cdot 10^{-4}\ \frac{\mathrm{m}^2}{\mathrm{m}}$$
Note that this value is approximately the mean of the values for $\bar{V}^o$ obtained from the lateral position error. The reason for this is that the platform position measurements are more reliable than the orientation measurements.
Circular motion   To verify that the proposed odometry model holds true for other trajectories, the platform was driven on a circle of radius R = 1 m with a velocity of 0.3 m/s (revolution time of 21 s). To avoid inadmissible wheel configurations, all
wheel modules were preset to the required steering angle for the projected trajectory.
Of the ten runs performed, five were in a counterclockwise and five in a clockwise sense.
The results are shown in Table 3.2 on page 26.
It can be seen that the sign of the average position error changes with the sense
of rotation and the orientation error average also differs significantly. This is why
one should again consider the second order moment $m_2$ of the pose error instead of the sample variance $s^2$. A comparison of this quantity with the simulated pose error variance yields:
• the longitudinal position uncertainty is very well approximated
• the lateral position uncertainty is overestimated by a factor of almost 60
• the orientation uncertainty is underestimated by a factor of almost 1.5
These results confirm that the hypothesis of uncorrelated traveled distances of the
individual wheel modules is not justified. In conclusion, one can say that this model
in combination with the implemented value of $\bar{V}^o$ yields a conservative estimate of
the pose uncertainty, which is either determined correctly or overestimated in most
analyzed cases.
It should be noted that the same trajectory driven with inadmissible wheel con-
figurations (the steering angles are commanded by the controller) yields experimental
results that match the proposed model even less. This was to be expected as the re-
dundancy resolution algorithm (2.6) on page 9 is a gross approximation of the platform
motion for inadmissible wheel configurations as noted in Section 2.1.3.
3.2 Gyroscope Error Model
The optical fibre gyroscope (OFG) mounted on the mobile platform measures the
platform yaw rate ˙ψ through optical interference. According to [1], the OFG is best
described by the following model:
$$\dot{\psi}^g = K^g \cdot \dot{\psi} \cdot (\cos\alpha + \delta_1) + \delta_2 + \delta_{noise} \qquad (3.14)$$
with
$\dot{\psi}^g$: measured yaw rate
$K^g$: scaling factor
$\dot{\psi}$: effective platform yaw rate
$\alpha$: angle of platform yaw axis to sensor coil
$\delta_1$: factor related to the internal stochastic variation of the scaling factor
$\delta_2$: gyroscope drift error (offset)
$\delta_{noise}$: zero mean white noise
We will assume $\alpha = 0$. Note that the distance separating the gyroscope from the platform yaw axis has no influence on the measured yaw rate since the device does not register translational motion.
The orientation increment measured by the gyroscope during one sample time interval $\Delta t^g$ is given by the following discrete time model:
$$\Delta\psi^g = K^g \cdot \Delta\psi + \Delta\psi^g_{drift} + \Delta\psi^g_{noise} \qquad (3.15)$$
with
$\Delta\psi^g$: measured orientation increment
$K^g$: scaling factor, cf. (3.14)
$\Delta\psi$: actual platform orientation increment
$\Delta\psi^g_{drift}$: gyroscope drift error (offset)
$\Delta\psi^g_{noise}$: zero mean white noise incl. the previously mentioned $\delta_1$ and $\delta_{noise}$
The noise components at two different sample times $\Delta\psi^g_{noise}(t_1)$ and $\Delta\psi^g_{noise}(t_2)$ are assumed to be statistically independent. The noise variance is considered to be made
up of two different components:
$$V^g = \dot{V}^g \cdot \Delta t + \bar{V}^g \cdot |\Delta\psi| \qquad (3.16)$$
1. The first term on the right hand side of (3.16) is independent from the yaw rate
and increases linearly with time (time-based variance).
2. The second term is independent from time but increases linearly with the ori-
entation increment during one sample interval (yaw rate-based variance). This
part of the gyroscope uncertainty is due to the internal stochastic variation of
the scaling factor $K^g$.
3.2.1 Systematic and Non Systematic Errors
As with any measurement process, the gyroscope measurements are subject to system-
atic and stochastic errors. Considering the model (3.15) combined with (3.16), we can
classify the model parameters into one of these categories:
1. Systematic errors: Kg
(if Kg
= 1), ∆ψg
drift.
2. Stochastic errors: ∆ψg
noise consisting of a time-based noise and a yaw rate-based
noise term
As with the odometry, we will attempt to reduce the systematic errors to a minimum
by calibrating the gyroscope. Subsequently, a series of measurements will enable us to
determine the parameters of the gyroscope model.
3.2.2 Experimental Determination of $\dot{\psi}^g_{drift}$ and $\dot{V}^g$
Procedure   The gyroscope data are recorded for nine hours while the gyroscope
is standing still. Before the gyroscope reaches operating temperature, its drift and
the noise corrupting the measurements are subject to a slight variation. This is why
the data recorded during the first hour were discarded. The rest of the data was
divided into eight sequences of one hour each. For each sequence Si (i = 1, ..., 8), linear
regression was performed according to the following model:
$$\Omega_i = \alpha_i + \beta_i \cdot t_i + \varepsilon_i \qquad (3.17)$$
where $\Omega_i$ is a random variable representing the measured yaw rate at sampling time $t_i$ and $\varepsilon_i$ is a residue term for which $\varepsilon_i$ is statistically independent of $\varepsilon_j$ ($i \neq j$). The residue is again assumed to be zero mean white noise:
$$\varepsilon_i \sim N\!\left(0, \sigma_i^2\right)$$
For each sequence, the following computations are performed:
1. The least squares estimators $\hat{\alpha}_i$, $\hat{\beta}_i$, $\hat{\sigma}_i^2$ of the linear regression model parameters in (3.17) are computed, $\sigma_i^2$ being the variance of the residue term $\varepsilon_i$.
2. If the gyroscope drift $\dot{\psi}^g_{drift}$ can be assumed constant for this particular sequence (i.e. $\hat{\beta}_i \ll 1$) it is set equal to $\hat{\alpha}_i$. The gyroscope time-based variance $\dot{V}^g$ is set equal to the residue variance estimator $\hat{\sigma}_i^2$ multiplied by the gyroscope sampling interval $\Delta t^g$ (8 ms).
The average value of the time-based noise variance $\dot{V}^g$ is computed over the eight sequences and this value is implemented in the existing simulation model. The drift is measured before each run performed with the mobile platform while it is standing still and averaged over a time period of 20 seconds.
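A minimal sketch of the per-sequence regression step is given below (Python with NumPy); the function name is illustrative and the default sample interval of 8 ms is the one quoted above.

import numpy as np

def gyro_drift_and_variance(t, yaw_rate, dt_gyro=0.008):
    """Estimate the gyroscope drift and time-based noise variance from a
    standstill sequence, following the linear regression model (3.17).

    t        : (N,) sample times [s]
    yaw_rate : (N,) measured yaw rates [rad/s] while the platform stands still
    dt_gyro  : gyroscope sample interval Delta t_g [s]
    """
    # least squares fit yaw_rate ~ alpha + beta * t
    A = np.column_stack([np.ones_like(t), t])
    (alpha, beta), *_ = np.linalg.lstsq(A, yaw_rate, rcond=None)
    residuals = yaw_rate - (alpha + beta * t)
    sigma2 = residuals.var(ddof=2)      # residue variance estimate
    drift = alpha                       # valid only if beta is negligibly small
    V_dot_g = sigma2 * dt_gyro          # time-based variance basis
    return drift, beta, V_dot_g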
Results   The values of the parameters of interest are given below:
$$\dot{\psi}^g_{drift,min} = -250.7\ {}^\circ/\mathrm{h}$$
$$\dot{\psi}^g_{drift,max} = -4.6\ {}^\circ/\mathrm{h}$$
$$\dot{\psi}^g_{drift,avg} = -160.9\ {}^\circ/\mathrm{h}$$
$$\hat{\sigma}^2_{avg} = 1.184 \cdot 10^{-4}\ \mathrm{rad^2/s^2}$$
$$\dot{V}^g = 9.475 \cdot 10^{-7}\ \mathrm{rad^2/s}$$
Figure 3.1: Distribution of drift compensated gyro measurements ($\dot{\psi} = 0$)
Furthermore, it should be recalled that the distribution of the residue $\varepsilon$ was assumed zero mean Gaussian. As can be seen from the distribution of the drift compensated gyroscope measurements in Figure 3.1, this assumption is not completely verified. The Gaussian probability distribution with the corresponding variance is plotted in red. However, we will proceed with the assumption of white Gaussian noise as it is detailed enough for the purpose of sensor fusion.
3.2.3 Experimental Determination of $K^g$ and $\bar{V}^g$
Determination of $K^g$   In order to determine the value of the scaling factor $K^g$, a series of circular trajectories were driven (counterclockwise and clockwise sense of rotation). The gyroscope data were recorded and the platform orientation was measured after each run. For each run, the following factor was computed:
$$K^g_i = \frac{\Delta\psi^g_i}{\Delta\psi_i} \qquad (3.18)$$
The average value over a series of ten runs yields an estimate of the real scaling factor. Another series of runs were driven at higher platform velocities to determine if the scaling factor is dependent on the yaw rate ($K^g = K^g(\dot{\psi})$).
                  | $|\dot{\psi}| = 0.16$ rad/s | $|\dot{\psi}| = 0.30$ rad/s
counterclockwise  | 0.9734 | 0.9898
clockwise         | 0.9832 | 0.9874
average           | 0.9783 | 0.9886

Table 3.3: Average $K^g$ for different yaw rates $\dot{\psi}$
The experimental results are presented in Table 3.3. As can be seen, there is a
slight dependency of $K^g$ on the yaw rate $\dot{\psi}$. An average value of $K^g = 0.9835$ will be
assumed and a correction factor of CF = 1.017 will be implemented to compensate for
this systematic error.
Determination of $\bar{V}^g$   Considering the gyroscope measurement noise model (3.16), the yaw rate-based variance term is given by the following equality:
$$\bar{V}^g = \frac{V^g - \dot{V}^g \cdot \Delta t^g}{|\Delta\psi|} \qquad (3.19)$$
Again, a series of circular trajectory runs are performed and the orientation error
of each run is computed according to the following equation:
$$e_\psi = \psi^m_{end} - \psi^g_{end} \qquad (3.20)$$
with
$\psi^m_{end}$: actual (measured) end orientation
$\psi^g_{end}$: end orientation according to gyroscope measurements
The sample average and variance of the orientation error for a series of runs with different yaw rates yield an approximate value for the yaw rate-based variance $\bar{V}^g$. The experimental results are presented in Table 3.4. The average of the orientation error lies within a tolerance band of $3 s_{e_\psi}$, which justifies the use of the orientation error variance to compute $\bar{V}^g$. The same is true if we separate the measurements into counterclockwise and clockwise series. Note that there is a significant difference between the variance bases computed for different yaw rates. The implemented value is chosen as:
$$\bar{V}^g = 5 \cdot 10^{-4}\ \frac{\mathrm{rad}^2}{\mathrm{rad}}$$

                           | $\bar{e}_\psi$ [rad] | $s^2_{e_\psi}$ [rad²] | $s_{e_\psi}$ [rad] | $\bar{V}^g$ [rad²/rad]
$\dot{\psi} = 0.16$ rad/s  | 0.0314 | 0.0057 | 0.0755 | $8.989 \cdot 10^{-4}$
$\dot{\psi} = 0.30$ rad/s  | 0.0077 | 0.0015 | 0.0389 | $2.374 \cdot 10^{-4}$

Table 3.4: Value of $\bar{V}^g$ for different yaw rates $\dot{\psi}$
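For illustration, the yaw rate-based variance basis can be recovered from such a series of runs roughly as sketched below (Python), under the assumption that the per-sample variances of (3.16) accumulate over a run of duration T and total rotation |∆ψ|; the example numbers correspond roughly to one full turn at 0.16 rad/s and are illustrative only.

def yaw_rate_variance_basis(s2_e_psi, total_rotation, duration, V_dot_g=9.475e-7):
    """Yaw rate-based variance basis of the gyroscope, following (3.16)/(3.19):
    the orientation error variance of a run is split into a time-based part
    V_dot_g * T and a rotation-based part V_bar_g * |delta_psi|.

    s2_e_psi       : sample variance of the end orientation error [rad^2]
    total_rotation : |delta_psi| accumulated during one run [rad]
    duration       : duration T of one run [s]
    """
    return (s2_e_psi - V_dot_g * duration) / total_rotation

# illustrative call: roughly one full turn driven at 0.16 rad/s
print(yaw_rate_variance_basis(s2_e_psi=0.0057, total_rotation=6.28, duration=39.0))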
3.3 Magnetic Tracker Error Model
The magnetic tracker used for the localization of the mobile platform tracks the mag-
netic field of an emitter and determines its position and orientation in six degrees of
freedom. The emitter is attached to the mobile platform and its two horizontal posi-
tion coordinates ($x^t$ and $y^t$) are tracked for the purpose of this thesis. At this point
it should be noted that the coordinate frame of the tracker position measurements is
coincident with the absolute coordinate frame for perfect tracker measurements.
3.3.1 Systematic and Non Systematic Errors
Similarly to the other sensors used, the magnetic tracker provides position measure-
ments which are affected by systematic and non systematic errors. The systematic
errors can be illustrated by a two dimensional graph mapping the lines of constant $x^t$ and $y^t$ coordinates onto the absolute coordinate system, as shown in Figure 3.2. This
graph was produced by recording the position measurements of the magnetic tracker
for a grid of equally spaced points. It is clearly visible that the position measurements
provided by the tracker are affected by systematic errors, thus generating a set of dis-
torted isolines. This distortion is due to the presence of metal in the lab floor, walls
and ceiling which influences the magnetic field received by the tracking device. The
systematic error depends on the actual position of the emitter in the lab environment.
An analytical description of this systematic error is however difficult to establish.
Figure 3.2: Isolines of the magnetic tracker measurement coordinates
In addition to the systematic errors, the tracker position measurements are corrupted by non systematic errors. We again assume this error to be white noise. The
aforementioned properties of the tracker position measurements can be summarized by
the following equalities:
$$x^t = \begin{pmatrix} x^t & y^t \end{pmatrix}^T = T(x) \qquad (3.21)$$

$$X^t \sim N\left(x^t, C_{X^t}\right) \qquad (3.22)$$

where x is the actual absolute position and with

$$C_{X^t} = \begin{pmatrix} \sigma^2_{X^t} & \rho\,\sigma_{X^t}\sigma_{Y^t} \\ \rho\,\sigma_{X^t}\sigma_{Y^t} & \sigma^2_{Y^t} \end{pmatrix} \qquad (3.23)$$
Note that in equation (3.22) the tracker position measurements are written in capital
letters where they represent random variables. The parameter $\rho$ in the tracker
covariance matrix (3.23) is a measure of the correlation between the random variables
$X^t$ and $Y^t$.
3.3.2 Calibration
In order to reduce the systematic errors in the tracker position measurements, it is
necessary to implement the following nonlinear inverse transformation:
$$x = \begin{pmatrix} x & y \end{pmatrix}^T = T^{-1}\left(x^t\right) \qquad (3.24)$$

The resulting mean and covariance matrix of a position measurement calibrated
using (3.24) are given in Theorem 1 in Appendix A, where the nonlinear transformation
g is to be replaced by $T^{-1}$.
As already mentioned, an analytical expression for T−1
is difficult to find. The pro-
cedure used to approximate this inverse transformation can be summarized as follows:
1. An equally spaced grid of points covering an area 5 m long by 5 m wide is
defined inside the magnetic tracker operating range. The distance separating the
grid points is 0.5 m on the edge of the area and 1 m inside the area. The reason for
this is that the quality of the position measurements gets worse with increasing
distance from the receiver situated in the center of the area.
Figure 3.3: Comparison of tracker measurements before and after calibration (tracker
before calibration, tracker after calibration, and actual path (best estimate); axes x [m], y [m])
2. The position measurements of the tracker are recorded during 30 seconds for
each of the 49 grid points, the measurement frequency being set to 33 Hz. The
average and the covariance matrix of the position measurements are computed
and stored in a database. Note that the covariance matrix (3.23) is approximated
using the sample variances $s^2_{X^t}$ and $s^2_{Y^t}$. The maximum recorded sample standard
deviation was $s_{X^t} = 4.02$ mm.
3. The inverse coordinate transformation (3.24) is approximated by creating a look-
up table which provides the corrected tracker position measurement by linearly
interpolating between the 49 mapped grid points.
The resulting corrected position measurements are compared to the uncorrected ones
in Figure 3.3 for a circular path of the mobile platform.
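A minimal sketch of such a look-up-table correction is given below; it is an illustration
only, with grid_tracker_xy and grid_true_xy as hypothetical arrays holding the averaged
tracker readings and the known absolute positions of the 49 grid points, and generic linear
interpolation standing in for the exact implementation used in this work.

    import numpy as np
    from scipy.interpolate import griddata

    def correct_tracker_measurement(raw_xy, grid_tracker_xy, grid_true_xy):
        """Approximate the inverse transformation T^-1 of (3.24) by linearly
        interpolating between the mapped grid points."""
        pt = np.atleast_2d(raw_xy)                                        # shape (1, 2)
        x = griddata(grid_tracker_xy, grid_true_xy[:, 0], pt, method='linear')[0]
        y = griddata(grid_tracker_xy, grid_true_xy[:, 1], pt, method='linear')[0]
        return np.array([x, y])   # NaN if the reading falls outside the mapped area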
It should be noted that there remains a slight systematic error in the tracker posi-
tion measurements. This systematic error cannot be described deterministically. One
of the reasons for this remaining error is of course the linear interpolation procedure
used. Another reason is the fact that the placement of the probe head on the different
grid points during the mapping operation is affected by an inevitable positioning error.
We assume that the uncertainty of the probe placement on the grid point xg is zero
mean Gaussian with a covariance matrix CXg . The uncertainty of the magnetic tracker
position measurements can be thought of as the sum of two normally distributed ran-
dom variables whose variances add up. Strictly speaking this is not entirely correct,
because one of the uncertainties in the position measurement is caused by a measure-
ment noise process and the other one by a systematic misplacement of the probe head.
During operation, the mobile platform will move across different regions of the mapped
grid, thus transforming the consecutive systematic errors into something similar to a
noise process. It should also be noted that the assumption made is equivalent to com-
puting the covariance matrix by using the second order moment, as has already been
done in preceding sections.
To account for the problem mentioned above, a typical position uncertainty of a few
millimeters for the noisy tracker position measurements was increased by an uncertainty
of 3 cm in both coordinates.
Furthermore, it should be pointed out that the position measurements of the mag-
netic tracking device are received with a slight delay τ. Without countermeasure, the
merging of the absolute position measurements of the tracker with the dead reckoning
position estimates would produce a biased pose estimate. By comparing best position
estimates (odometry fused with gyroscope) on a circular trajectory, the delay τ was
estimated to be around 180 ms. In the perspective of a real time implementation of
the localization algorithm, the delay of the tracker position measurements will have to
be accounted for in the merging operation of the Extended Kalman Filter.
Chapter 4
Sensor Fusion
The process of sensor fusion makes it possible to combine the strengths of different kinds of sensors
by merging their information. The contribution of each sensor is weighted according to
some criteria. In navigation problems this can be used to take advantage of the high
sampling rate of dead reckoning sensors while bounding the growing pose uncertainty
by using absolute position or orientation updates.
In this thesis, the Extended Kalman Filter is used to obtain improved pose estimates
by merging the information provided by multiple sensors. The Kalman Filter is a set of
recursive equations whose purpose is to estimate the state of a system using a process
model on the one hand and measurements on the other hand. The basic Kalman Filter
cycle consists of two alternating stages: the time update ('predict'), which yields the
a priori estimate, and the measurement update ('correct'), which yields the a posteriori
estimate and feeds it back into the next time update.
The state of the system is projected ahead using a process model. These equations
are referred to as the predictor or time update equations. If any measurement becomes
available the state is corrected proportionally to the discrepancy that exists between
the intercepted measurement and the measurement predicted using the state variables
updated by the process model. These equations are termed the corrector or measure-
ment update equations. Section 4.1 treats the equations of the Extended Kalman
Filter in more detail.
4.1 The Extended Kalman Filter
This section shall give a basic overview of the Extended Kalman Filter algorithm and
is taken from [3].
The equations of the standard Kalman Filter have been established in the 1960’s
for linear systems. These formulas were subsequently extended to deal with non linear
processes and measurement relationships. This modified Kalman Filter is referred to as
the Extended Kalman Filter (EKF) and linearizes the process about the current mean
and covariance. In the subsequent developments of this chapter, vector quantities are
not underlined for the sake of clarity. The state vector of the process is termed $x \in \Re^n$
and the process is governed by the nonlinear stochastic difference equation
xk+1 = f (xk, uk, wk) (4.1)
with a measurement z ∈ ℜm
that is
zk = h (xk, vk) (4.2)
where uk is any driving function and the random variables wk and vk represent the
process and measurement noise respectively. They are assumed to be independent of
each other, white and with normal probability distributions.
p (w) ∼ N (0, Q)
p (v) ∼ N(0, R)
Note that Q and R are covariance matrices. The state and the measurement vectors
can be approximated as
˜xk+1 = f (ˆxk, uk, 0) (4.3)
and
˜zk = h (˜xk, 0) (4.4)
where ˆxk is some a posteriori estimate of the state (from a previous time step k).
To estimate a process with non-linear difference and measurement relationships, we
write new governing equations that linearize an estimate about (4.3) and (4.4)
xk+1 = ˜xk+1 + A (xk − ˆxk) + Wwk, (4.5)
zk = ˜zk + H (xk − ˜xk) + V vk (4.6)
with the Jacobian matrices at time step k
$$A_{ij} = \frac{\partial f_i}{\partial x_j}\left(\hat{x}_k, u_k, 0\right), \quad
W_{ij} = \frac{\partial f_i}{\partial w_j}\left(\hat{x}_k, u_k, 0\right), \quad
H_{ij} = \frac{\partial h_i}{\partial x_j}\left(\tilde{x}_k, 0\right), \quad
V_{ij} = \frac{\partial h_i}{\partial v_j}\left(\tilde{x}_k, 0\right).$$
Note that the Kalman Filter is based on the assumption of white Gaussian noise
with zero mean. Some modifications of the basic equations are necessary to deal with
colored noise. It was already mentioned in Section 3.3.2 that a Gaussian distribution
is not preserved under a nonlinear transformation. The EKF linearizes the process
about the current mean and covariance, such that the random variables governing the
corresponding error process can be assumed zero mean with a Gaussian probability
distribution.
The complete set of EKF equations is given in Table 4.1a and Table 4.1b. The time
update equations project the state and covariance estimates from time step k to time
step k +1. Ak and Wk are the process Jacobians and Qk is the process noise covariance
at time step k. The time update equations correspond to the compounding operation
mentioned in [4]. The measurement update equations correct the state and covariance
estimates with the measurement zk. Hk and Vk are the measurement Jacobians and Rk
is the measurement noise covariance matrix at time step k. The measurement update
equations correspond to the merging operation mentioned in [4].
$$\hat{x}^-_{k+1} = f\left(\hat{x}_k, u_k, 0\right) \qquad (4.7)$$
$$P^-_{k+1} = A_k P_k A_k^T + W_k Q_k W_k^T \qquad (4.8)$$

Table 4.1a: EKF Time Update Equations

$$K_k = P^-_k H_k^T \left(H_k P^-_k H_k^T + V_k R_k V_k^T\right)^{-1} \qquad (4.9)$$
$$\hat{x}_k = \hat{x}^-_k + K_k \left(z_k - h\left(\hat{x}^-_k, 0\right)\right) \qquad (4.10)$$
$$P_k = \left(I - K_k H_k\right) P^-_k \qquad (4.11)$$

Table 4.1b: EKF Measurement Update Equations
Note that the covariance projection equation in Table 4.1a has already been ob-
tained when considering the uncertainty propagation of the odometry model in equa-
tion (3.8) in Section 3.1.1.
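To make the cycle of Tables 4.1a and 4.1b concrete, a minimal sketch of one generic EKF
iteration is given below. This is not the thesis implementation; the process function f, the
measurement function h and the Jacobians A, W, H, V evaluated at the current estimate
are placeholders supplied by the caller.

    import numpy as np

    def time_update(x_hat, P, u, f, A, W, Q):
        """One time update ('predict') step, cf. Table 4.1a."""
        x_pred = f(x_hat, u, 0)                      # (4.7), zero process noise
        P_pred = A @ P @ A.T + W @ Q @ W.T           # (4.8)
        return x_pred, P_pred

    def measurement_update(x_pred, P_pred, z, h, H, V, R):
        """One measurement update ('correct') step, cf. Table 4.1b."""
        S = H @ P_pred @ H.T + V @ R @ V.T
        K = P_pred @ H.T @ np.linalg.inv(S)          # (4.9), Kalman gain
        x_hat = x_pred + K @ (z - h(x_pred, 0))      # (4.10)
        P = (np.eye(len(x_pred)) - K @ H) @ P_pred   # (4.11)
        return x_hat, P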
4.2 Delayed-State Measurement Problem
One of the first steps in designing a Kalman Filter is to choose the variables to be
included in the state vector. In the case considered here a natural choice for the state
variables are the position and orientation of the mobile platform:
$$x_k = \begin{pmatrix} x_k & y_k & \psi_k \end{pmatrix}^T \qquad (4.12)$$
However, the measurements provided by the odometry and the gyroscope cannot
be expressed as functions of these state variables. The measurement equation is of the
form:

$$z_k = h\left(x_k, x_{k-1}, v_k\right) \qquad (4.13)$$
This measurement equation does not fit the standard measurement equation (4.2)
because of the occurrence of xk−1. This problem is common in navigation and is known
as the delayed-state measurement problem and can be accounted for by modifying
the Kalman Filter equations so as to accommodate the $x_{k-1}$ term. This procedure is
described in [5].
The approach chosen in the present work is to subdivide the fusion process into two
separate steps. This is explained below and illustrated in Figure 4.1.
1. The yaw rate measured by the gyroscope is integrated over one time step to yield
an orientation increment and the result is fused with the pose increment mea-
surements of the odometry. The state vector of this process is chosen to contain
position and orientation increments, such that the delayed-state measurement
problem is avoided.
2. The position measurements of the tracker are fused with the pose estimate ob-
tained by integrating the previously computed pose increments. The state vector
of this process is chosen to contain the absolute platform pose according to (4.12).
Note that the integration of stereo image processing data into the Kalman Filter
structure is slightly more complicated and is explained in Chapter 5.
In the following, the implementation of both subfilters will be discussed further.
4.3 Merging of Incremental Pose Estimates
The state vector involved in the fusion of gyroscope measurements with odometry pose
increments is chosen as:
$$x_k = \begin{pmatrix} {}^P\Delta x_k & {}^P\Delta y_k & \Delta\psi_k \end{pmatrix}^T \qquad (4.14)$$

Note that the reference frame of the position increments ${}^P\Delta x_k$ and ${}^P\Delta y_k$ is the platform
coordinate system.
Figure 4.1: Diagram showing the implemented Kalman Filter structure (EKF1, dead
reckoning: time update with eq. 2.6 driven by the odometry measurements $\Delta s^o_i$, $\varphi^o_i$,
measurement update with the gyroscope increment $\Delta\psi^g$; EKF2, absolute position: time
update with eq. 2.8/2.9 driven by the fused pose increment $\Delta x_k$, measurement update
with the tracker position $x^t$, yielding the pose estimate $x_k$)
Time Update The time update is performed according to the equations in Ta-
ble 4.1a. The process model f (·) used to project the state ahead is given by the
system of equations (2.6) on page 9. The system Jacobian matrices Ak and Wk are
computed according to the expressions found in Section 4.1. It should be indicated that
Ak is a zero matrix in this case, because x k+1 does not depend on x k. The covariance
matrix of the process noise Qk is given in (3.5) on page 20 and represents the quality
of the pose increment measurements using the proposed odometry model. Note the
dependence of $Q_k$ on the odometry variance basis $\bar{V}^o$.
Measurement Update The measurement update is performed according to the
equations in Table 4.1b. The measurement equation is given by:

$$h\left(x_k, v_k\right) = \Delta\psi_k + v_k = H \cdot x_k + v_k$$

where $H = \begin{pmatrix} 0 & 0 & 1 \end{pmatrix}$ and $v_k$ is the measurement noise.
The measurement noise variance $R_k$ is a scalar and is given by equation (3.16) on page
30. The term $V_k$ is also a scalar and equals 1.
Note that the measurement update equations are only run through if a gyroscope
measurement is available. The measurement rate is 125 Hz, which approximately equals
the frequency of the odometry measurements.
4.4 Including Absolute Position Updates
The pose increments provided by the previous filter are integrated and the resulting ab-
solute pose is fused with the position measurements supplied by the magnetic tracking
device. In this case, the state vector is chosen as:
$$x_k = \begin{pmatrix} x_k & y_k & \psi_k \end{pmatrix}^T \qquad (4.15)$$
Note that in this case the reference frame is the absolute coordinate frame.
Time Update The time update is performed according to the equations in Ta-
ble 4.1a. The process model f (·) used to project the state ahead is given by the
system of equations (2.9) in combination with (2.8) on page 15. Again, the system
Jacobian matrices Ak and Wk are computed according to the expressions found in Sec-
tion 4.1. The process noise covariance matrix is assembled by making the following
considerations:
1. Since the matrix Qk represents the uncertainty propagated by the process, one
of the components of Qk will be the state covariance matrix Pdr,k computed in
the dead reckoning process upstream of this filter.
2. The system equations (2.8) and (2.9) are derived assuming constant velocity and
yaw rate of the mobile platform. An additional uncertainty of the process is
consequently the error resulting if this condition is violated. Possible values for
the standard deviation in the position and orientation are:
$$\sigma_{\xi,a} = \frac{1}{2}\, a_{\xi,max} \cdot \Delta t^2 \qquad (4.16)$$
where a represents a linear or angular acceleration and ξ ∈ {x, y, ψ}.
Recapitulating, the expression of Qk can be assembled as follows:
$$Q_k = P_{dr,k} + \begin{pmatrix} \sigma^2_{x,a} & 0 & 0 \\ 0 & \sigma^2_{y,a} & 0 \\ 0 & 0 & \sigma^2_{\psi,a} \end{pmatrix} \qquad (4.17)$$
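As an illustration, a small sketch (not the thesis code) of how $Q_k$ could be assembled from
the dead reckoning covariance and the acceleration-based terms; the maximum accelerations
and the time step are assumed tuning parameters.

    import numpy as np

    def process_noise_covariance(P_dr, a_x_max, a_y_max, a_psi_max, dt):
        """Assemble Q_k according to (4.16) and (4.17)."""
        sigma = 0.5 * np.array([a_x_max, a_y_max, a_psi_max]) * dt**2   # (4.16)
        return P_dr + np.diag(sigma**2)                                  # (4.17)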
Measurement Update The measurement update is performed according to the
equations in Table 4.1b. The measurement equation is given by:
$$h\left(x_k, v_k\right) = \begin{pmatrix} x_k & y_k \end{pmatrix}^T + v_k = H \cdot x_k + v_k$$

where $H = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}$ and $v_k$ is the measurement noise.
The measurement noise covariance matrix is a 2 × 2 matrix whose entries are com-
puted by interpolating the covariance matrices of the corresponding grid points mea-
sured during the tracker mapping operation described in Section 3.3.2. The matrix Vk
is the 2 × 2 identity matrix.
The measurement update routine is run through only if a tracker position mea-
surement is available. The measurement rate of the tracker is approximately 30 Hz,
but lower update frequencies can be chosen as well. Note that the slight delay τ in
the tracker measurements has been neglected in this work, but should be addressed in
more detail if very precise positioning is required.
Chapter 5
Improving Localization with a
Stereo Camera
5.1 Approach
A stereo camera offers the possibility to obtain depth images of the environment by
comparing the images provided by its cameras (usually two or three) at the same
point in time. Objects in the field of view can thus be localized with respect to the
stereo camera. This is the subject of Section 5.5. By mounting such a device on the
mobile platform it is possible to localize the platform in the environment provided the
absolute position of the objects detected in a stereo image pair is known. In another
approach, a priori information about the absolute position of the detected objects is not
indispensable. It consists of tracking these objects in a sequence of image pairs
to periodically correct the pose increments provided by a dead reckoning algorithm.
This is done by adapting the Extended Kalman Filter as explained in Section 5.8,
after assessing the errors involved in stereo image processing in Section 5.6. In the
framework of this thesis five artificial landmarks are attached to the walls inside the lab
environment. These landmarks are used to assess the localization procedures described
above.
5.2 Hardware Description
In order to enhance the pose estimation, the mobile platform is equipped with a stereo
camera manufactured by Point Grey Research. A picture of the camera can be seen in
Figure 5.1. Some important specifications are given in the following table:
Stereo Camera Specifications
  CCD chips:        two Sony ICX084 color CCD chips
  Image resolution: 640 × 480
  Frame rate:       30 Hz
  Baseline:         12 cm
  Focal length:     2 mm
  Field of view:    approx. 100°
Note that the camera is not operated in color mode for the purpose of this work.
This reduces the amount of data to be transferred between the camera and the on-board
computer. The captured images are gray scale intensity images. In the experiments
performed in the framework of this thesis the camera is operated at a frame rate of
approximately 2 Hz.
5.3 Depth Map Generation
The principle of object localization using a stereo camera is a very familiar one: the
same object will present a horizontal shift between the left and right images. The
amount of shift depends on the distance between the object and the camera. This is
illustrated in Figure 5.2. A disparity image or depth map can thus be established
by computing the pixel shift in a pair of stereo images. Sub-pixel precision can be
reached by comparing intensity values of corresponding pixels. The determination of
the position of an object by analyzing a disparity image is termed the reconstruction
process and the inverse task of determining where an object is mapped onto the image
plane is termed the perception process. The present work focuses mostly on the recon-
struction process, because it is the basis for localizing the platform with respect to its
environment.

Figure 5.1: Stereo camera (baseline 12 cm)

Figure 5.2: Disparity in stereo images ((a) left image, (b) right image)
5.4 Coordinate Systems
Before going into the description of the stereo camera error model it is necessary to
define the reference coordinate systems involved. The task of localizing the mobile
platform is a two dimensional problem. The determination of the vertical position of
objects with respect to the platform does not provide additional information to be used
for pose estimation. We will thus consider two dimensional coordinate systems in the
reconstruction process, unless otherwise specified. There are three reference coordinate
systems to be considered for the localization problem using stereo image processing:
1. The stereo camera coordinate system $\left(C, {}^C x, {}^C y\right)$
2. The mobile platform coordinate system $\left(P, {}^P x, {}^P y\right)$
3. The absolute coordinate system $(O, x, y)$
As mentioned earlier, the absolute coordinate system corresponds to the coordinate
system of the tracker position measurements. An illustration of the disposition of
the coordinate systems mentioned above is shown in Figure 5.3. The position of the
landmarks used for improved pose estimation is also indicated.
Figure 5.3: Coordinate systems in lab environment (absolute frame $(O, x, y)$, platform
frame $(P, {}^P x, {}^P y)$, camera frame $(C, {}^C x, {}^C y)$, walls, and landmark positions)
When analyzing the perception process, additional coordinate systems attached
to the stereo camera will be considered. These are illustrated in Figure 5.4. The
coordinate frame attached to each camera originates from the piercing point of each
camera, called the optical center. The stereo camera coordinate frame $\left(C, {}^C x, {}^C y\right)$
originates from the center of the baseline separating both cameras.
Figure 5.4: Stereo camera coordinate systems (left camera frame $C_L$, right camera
frame $C_R$, and stereo camera frame $C$)
5.5 Object Localization
In order to achieve improved pose estimation, the position of objects in the environ-
ment must be determined. In Section 5.5.1 the process of mapping an object given
by its two dimensional coordinates onto the camera image planes will be considered.
This is the perception process mentioned earlier. In Section 5.5.2 the inverse process
of localizing an object with respect to the mobile platform will be discussed. This is
the aforementioned reconstruction process. The problem considered here is two di-
mensional, i.e. the considered object is identified by its x and y coordinates in the
environment. This simplifies both the perception and the reconstruction process. For
an analysis of the equivalent three dimensional problem, the reader is referred to [6].
5.5.1 Stereo Camera Model
The problem of interest here is to understand how an object with position coordinates
${}^C x = \begin{pmatrix} {}^C x & {}^C y \end{pmatrix}^T$ in the stereo camera coordinate frame is mapped onto the image
planes of the left and right cameras. Notice that the term object refers to point features
here, e.g. the center of gravity of the object surface as it is mapped onto the image
plane. The situation is illustrated in Figure 5.5. For convenience, the image planes are
represented in front of the optical centers situated at the same level as the camera lenses.
In the following developments we assume perfect camera geometry. The epipolar lines
of both cameras are perfectly aligned, i.e. their image planes are parallel. The mapping
of the object onto the image planes is assumed to obey a pinhole camera model (no lens
distortion). In the two dimensional case examined here, it is not necessary to consider
Figure 5.5: Stereo geometry (object at ${}^C x = \left({}^C x, {}^C y\right)$, optical centers $C_L$ and $C_R$,
focal length f, baseline 2b, and horizontal image plane coordinates $u_l$, $u_r$)
the whole image plane. Since we are not interested in the vertical coordinate of the
object, it suffices to consider the lines representing the pixel row where the object is
mapped. The formulas for the horizontal image plane coordinates ul and ur according
to a pinhole camera model are given by:
$$u_l = \frac{f \cdot {}^C x + f \cdot b}{{}^C y} \qquad u_r = \frac{f \cdot {}^C x - f \cdot b}{{}^C y} \qquad (5.1)$$
where f is the focal length of the camera and the baseline has length 2b.
The computed image plane coordinates are given with respect to the image center
point, which is the projection of the optical center onto the image plane. To obtain
the pixel coordinates ul,M and ur,M in the computer memory frame, an additional
transformation is necessary:
$$u_{l,M} = \frac{1}{d_x} \cdot u_l + C_x \qquad u_{r,M} = \frac{1}{d_x} \cdot u_r + C_x \qquad (5.2)$$
where dx is the distance separating two sensor elements of the CCD chip in
x direction and Cx is the pixel coordinate of the image center in x direction.
The pixel coordinates of the image center are obtained by the calibration procedure of
the camera and are stored in a calibration file.
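A minimal sketch of this perception model follows; it is an illustration only, with the focal
length f, half baseline b, sensor element spacing d_x and image center coordinate assumed
to come from the calibration file.

    def project_to_pixels(Cx_obj, Cy_obj, f, b, dx, Cx_center):
        """Map an object (Cx_obj, Cy_obj), given in the stereo camera frame, onto
        the horizontal pixel coordinates of the left and right images."""
        u_l = (f * Cx_obj + f * b) / Cy_obj      # (5.1), left image plane coordinate
        u_r = (f * Cx_obj - f * b) / Cy_obj      # (5.1), right image plane coordinate
        u_l_M = u_l / dx + Cx_center             # (5.2), left pixel coordinate
        u_r_M = u_r / dx + Cx_center             # (5.2), right pixel coordinate
        return u_l_M, u_r_M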
5.5.2 Triangulation Equations of the Reconstruction Process
The formulas for the object coordinates are obtained by solving (5.1) for the object
coordinates:
$${}^C x = b \cdot (u_l + u_r)/(u_l - u_r) \qquad {}^C y = 2bf/(u_l - u_r) \qquad (5.3)$$
In these formulas it is implicitly assumed that the transformation from pixel coordinates
to image plane coordinates has already been done.
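The corresponding reconstruction step can be sketched as follows (again only an illustration,
assuming the pixel-to-image-plane conversion has already been applied):

    def triangulate(u_l, u_r, f, b):
        """Reconstruct the 2D object position in the stereo camera frame from the
        horizontal image plane coordinates of the left and right cameras, cf. (5.3)."""
        disparity = u_l - u_r
        Cx_obj = b * (u_l + u_r) / disparity
        Cy_obj = 2.0 * b * f / disparity
        return Cx_obj, Cy_obj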
5.6 Systematic and Non Systematic Errors
As for the other sensors considered in this work the localization using stereo image
processing is affected by systematic and non systematic errors. The systematic error
component is usually minimized by camera calibration. Although the camera calibra-
tion parameters are specified by the manufacturer, an additional calibration procedure
was able to reduce the systematic error in the reconstruction process even further.
This procedure is described in Section 5.6.1. An analysis of the non systematic errors
encountered in stereo image processing and their effect on the reconstruction process
is given in Section 5.6.2.
5.6.1 Camera Calibration
Camera calibration is the procedure used to determine the values of intrinsic and
extrinsic camera parameters in order to quantify the systematic errors in the mapping
of points onto the camera CCD chip with respect to a pinhole camera model. For a
description of the calibration parameters and an example of a calibration method, the
reader is referred to [6].
The stereo camera used provides rectified images which have also been corrected for
lens distortion and camera misalignment. The intrinsic camera calibration parameters
Cx, f and the extrinsic parameter b are listed in a calibration file.
Despite this factory calibration the need for improved precision in the reconstruction
process was derived from a series of measurements using a pattern of markers positioned
in front of the camera at a well-defined distance. A comparison of the marker positions
computed with the reconstruction formula (5.3) and the real marker positions revealed
systematic errors of up to 20%. Clearly, these errors are too high to achieve increased
precision in the localization of the mobile platform.
In the problem considered here, it is possible to improve the precision of the re-
construction process by an additional calibration. The condition for this is to ensure
that the objects used for improved pose estimation always map onto the same region
of the CCD chip. In the case of landmark detection studied here, this is achieved by
adjusting the height of the camera to fit the height of the artificial landmarks. This
ensures that the center point of the landmarks maps onto a narrow horizontal band of
the CCD chip. Additionally, the vertical pixel coordinate of the landmark center point
is independent of the distance separating the platform from the landmark.
The additional calibration procedure is described below. Notice that the operations
mentioned are performed for both cameras.
1. The stereo camera is placed at a well-defined distance with respect to a calibration
pattern consisting of nine white markers on a black cardboard. The position ${}^C x$
of each marker with respect to the stereo camera is known. A picture of the setup
is shown in Figure 5.6.
2. The horizontal pixel coordinate of the center point of each marker is extracted
from a captured camera image by an appropriate algorithm written in Matlab
code. The extracted center point pixel coordinates are compared to the pixel
coordinates computed according to the pinhole camera model (5.1) and (5.2).
Figure 5.7a shows an image of the calibration pattern as grabbed by the right
camera. The theoretical marker center point pixel coordinates are illustrated
together with the extracted ones. Subsequently a quadratic polynomial is fitted
to the extracted center point pixel coordinates as shown in Figure 5.7b.
Figure 5.6: Camera calibration setup
3. A correction function is determined by inverting the quadratic equation obtained
in the previous step. Each raw pixel coordinate extracted from an image is cor-
rected using this function. The result of the correction applied to the calibration
pattern is illustrated in Figure 5.8, where the reconstructed marker coordinates
are plotted alongside the actual ones. It can be seen that the depth of the markers
with respect to the camera is systematically overestimated by up to 15% before
applying the additional calibration procedure. This systematic error is removed
after applying the correction function.
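The fitting and inversion of steps 2 and 3 can be sketched as follows; this is a rough
Python illustration rather than the Matlab code actually used, and u_pinhole and
u_extracted are assumed arrays of the theoretical and extracted marker center coordinates.

    import numpy as np

    # Quadratic fit mapping pinhole-model coordinates to the extracted ones (Figure 5.7b).
    a, b, c = np.polyfit(u_pinhole, u_extracted, deg=2)

    def correct_pixel_coordinate(u_raw, width=640):
        """Invert the fitted quadratic: map a raw extracted pixel coordinate back to
        its pinhole-model equivalent (the root lying inside the image is kept)."""
        roots = np.roots([a, b, c - u_raw])
        real = roots[np.isreal(roots)].real
        return real[(real >= 0) & (real <= width)][0]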
Figure 5.7: Comparison of ideal and extracted pixel coordinates in calibration pattern
((a) ideal and extracted marker center points; (b) correction function by curve fitting,
$u = 3.6 \cdot 10^{-5} \cdot u_{pinhole}^2 + 0.98 \cdot u_{pinhole} + 3.1$)
Figure 5.8: Reconstructed marker locations before and after the additional calibration
(reconstruction algorithm vs. actual marker positions; axes ${}^C x$ [m], ${}^C y$ [m])
5.6.2 Non Systematic Errors
The perception process responsible for mapping objects in the environment onto the
CCD chip of each camera is affected by random errors. Some of the reasons for these
errors are listed below:
• temperature dependent CCD noise
• varying illumination conditions
• spatial discretization error
• intensity discretization error
• calibration errors
• coordinate transformation errors from $(O, x, y)$ to $\left(C, {}^C x, {}^C y\right)$
Though the last two error sources mentioned are systematic ones, the compounding
of several such error sources tends towards a random distribution. The intensity dis-
cretization error refers to the discretization of brightness into intensity values ranging
from 0 to 255. The spatial discretization error designates the fact that an image is
an array of pixels. The lower the resolution of the image, the higher the spatial dis-
cretization error. The effect of the image discretization in pixels on the reconstruction
process is illustrated in Figure 5.9. The tick marks on the image planes denote pixel
boundaries, and the radiating lines extend these boundaries into space. The position
of the point P can lie anywhere in the region enclosed by the extended pixel bound-
aries. The further the object is from the camera, the more elongated the uncertainty
region becomes. Note that it is also asymmetric and oriented. The uncertainty model
developed below will attempt to approximate the size, shape and orientation of these
regions by uncertainty ellipses. The uncertainty in the reconstruction of an object lo-
cation is obtained by propagating the uncertainty in the horizontal pixel coordinate of
each camera. This procedure is very similar to approaches adopted for other sensors
in Chapter 3. We assume 1D, normally distributed error in the measured horizon-
tal image coordinates and derive the 2D Gaussian distributions describing the error
in the inferred 2D object coordinates. This procedure is described in [7] and will be
recapitulated here.

Figure 5.9: Uncertainty in stereo image processing (the extended pixel boundaries of the
left and right cameras enclose the uncertainty region around point P, approximated by
an uncertainty ellipse)
Let the horizontal image plane coordinates be given by Ul and Ur in the left and
right image, respectively. Consider these as normally distributed random scalars with
means $u_l = \mu(U_l)$ and $u_r = \mu(U_r)$ and variances $V^c_l$ and $V^c_r$. The mean 2D position of
a point P is computed by inserting $u_l$ and $u_r$ in equations (5.3), and the covariance
matrix is determined using the following linear approximation:

$$C_P = J^c \cdot \begin{pmatrix} V^c_l & 0 \\ 0 & V^c_r \end{pmatrix} \cdot {J^c}^T \qquad (5.4)$$

where

$$J^c = \begin{pmatrix} \dfrac{\partial {}^C x}{\partial u_l} & \dfrac{\partial {}^C x}{\partial u_r} \\[2mm] \dfrac{\partial {}^C y}{\partial u_l} & \dfrac{\partial {}^C y}{\partial u_r} \end{pmatrix}$$
The matrix $J^c$ is the Jacobian matrix of the system of triangulation equations (5.3).
This uncertainty model considers the left and right image coordinates as uncorrelated.
The values for the pixel variances $V^c_l$ and $V^c_r$ must be determined.
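The propagation (5.4) can be written out explicitly, since the partial derivatives of the
triangulation equations (5.3) have closed forms; the following sketch is only an illustration
of this step.

    import numpy as np

    def object_covariance(u_l, u_r, f, b, V_l, V_r):
        """Propagate the pixel variances V_l, V_r into the covariance C_P of the
        reconstructed 2D object position, cf. (5.4)."""
        d = u_l - u_r                                        # disparity
        J = np.array([[-2.0 * b * u_r / d**2, 2.0 * b * u_l / d**2],
                      [-2.0 * b * f   / d**2, 2.0 * b * f   / d**2]])
        return J @ np.diag([V_l, V_r]) @ J.T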
To get an estimate of the random error originating from the first three random error
sources mentioned on page 59, the calibration setup illustrated in Figure 5.6 is again
used. A total of 500 frames of the calibration pattern are captured, and the error
variance of the extracted center point coordinates is determined for each of the nine
markers. Here it is implicitly assumed that the algorithm extracting the center point
coordinate of the markers is perfect and that the only error registered is due to the
aforementioned random error sources. The maximum recorded variance amounts to
$s_u^2 = 2.666 \cdot 10^{-3}\ \mathrm{pixel}^2$. However, as mentioned in [6], this image processing noise can
be neglected compared to the remaining systematic errors in the perception process.
In fact the calibration procedure minimizes the sum of the errors between the real
camera and a pinhole camera model over the whole image plane (or a portion of it).
There are consequently still systematic errors varying depending on the considered
image region. To account for this, the value of the implemented pixel variance is
conservatively increased to $V^c_l = V^c_r = 1\ \mathrm{pixel}^2$.
The effect of the implemented pixel noise variance on the position estimation of
objects in the environment is illustrated in Figure 5.10. The first picture shows the
uncertainty ellipses surrounding two objects situated in front of the mobile platform
equipped with the stereo camera. The size of the uncertainty regions was computed
using equation (5.4). The procedure for computing ellipsoid dimensions from the corre-
sponding covariance matrix is mentioned in [4]. The second picture is a closeup of the
uncertainty regions of objects 1 and 2. The flocks of points are the results of a simulation
using the reconstruction algorithm (5.3) and corrupting the ideal pixel coordinates of
the objects by a noise of standard deviation 1 pixel. It can be seen that the elongated
shape of the uncertainty region predicted in Figure 5.9 is only partially described by
the Gaussian model. The reason for this is the linear approximation used in (5.4).
The real distribution is not Gaussian because of the non-linearity of the reconstruction
process (5.3).
Figure 5.10: Uncertainty in object locations through stereo image processing
5.7 Motion Solving Algorithm
The problem considered here is the following:
Given the locations of a set of n points in the platform coordinate frame at
time t and the locations of the same set of n points in the platform coordinate
frame at time t', determine the motion parameters of the platform between
t and t'. In other words, determine the translation vector t and the rotation
matrix R describing the platform motion from instant t to instant t'.
This motion solving algorithm should not be computationally expensive, especially if
it is to be included into the sensor fusion algorithm for improved pose estimation. In
fact the landmark extraction process is already computationally expensive by itself.
An efficient motion solving algorithm is found in [8] and is based on solving an
Orthogonal Procrustes problem. More on the Orthogonal Procrustes problem can be
found in [9]. The algorithm is summarized below. In the subsequent developments,
the point locations are considered affected by measurement errors.
1. Let $P = \begin{pmatrix} p_1 & \ldots & p_n \end{pmatrix}$ be a matrix containing the n point locations in the platform
coordinate frame at time t and $P' = \begin{pmatrix} p_1' & \ldots & p_n' \end{pmatrix}$ be a matrix containing the n
corresponding point locations in the platform coordinate frame at time t'. The
relationship between P and P' is given by the following equation:

$$P = RP' + tJ^T + E^T$$

with
R : unknown rotation matrix
t : unknown translation vector
J : n × 1 vector of ones
E : matrix of residuals

Transposing both sides of this equation yields:

$$P^T = P'^T R^T + Jt^T + E$$

After renaming some of the variables, the basic equation of the motion solving
algorithm is obtained:

$$B = AR^T + Jt^T + E \qquad (5.5)$$
2. Compute the column mean vectors of matrices A and B:

$$\bar{\alpha} = \left(A^T J\right)/n \qquad \bar{\beta} = \left(B^T J\right)/n$$

These vectors are subtracted from matrices A and B:

$$A^* = A - J\bar{\alpha}^T \qquad B^* = B - J\bar{\beta}^T$$

The matrices $A^*$ and $B^*$ are input to a standard Procrustes subroutine to compute
the unknown rotation matrix R. This subroutine is explained in more detail in
the next step.

3. Compute the following matrix:

$$S = A^{*T} B^*$$

Diagonalize $S^T S$ and $S S^T$ using a standard singular value decomposition routine:

$$S^T S = V D_S V^T \qquad S S^T = W D_S W^T$$

V and W are orthogonal matrices containing the eigenvectors of $S^T S$ and $S S^T$.
Next, the unknown rotation matrix is found by using V and W:

$$\hat{R}^T = W V^T \qquad (5.6)$$

When implementing this procedure, a problem might arise concerning the orien-
tation of the eigenvectors contained in V and W. It might be necessary to reflect
some of these eigenvectors to obtain an admissible rotation matrix R. More on
this problem is found in [9].
4. An estimate of the matrix B can now be computed using $\hat{R}$, as well as the resid-
ual matrix E:

$$\hat{B} = A^* \hat{R}^T + J\bar{\beta}^T \qquad E = B - \hat{B}$$

Finally, the translation vector is given by:

$$\hat{t} = \left(B - A\hat{R}^T\right)^T J / n \qquad (5.7)$$
Equations (5.6) and (5.7) allow us to infer the motion of the mobile platform between
the instants t and t'. The only iterative part of the algorithm involves taking the
singular value decomposition of a 2 × 2 matrix. It should also be pointed out that
the estimated values $\hat{R}$ and $\hat{t}$ are optimal in a least squares sense. However, each of
the object locations stored in the matrices A and B is equally weighted. An improved
solution can be obtained if the Gaussian probability distribution of the object locations
is taken into account. Let $C_i$ and $C_i'$ denote the covariance matrices corresponding to
the position of point $p_i$ at time t and the corresponding point $p_i'$ at time t', respectively.
If n again denotes the number of points used to infer the platform motion, then the
function to be optimized is:

$$\sum_{i=1}^{n} \epsilon_i^T W_i \epsilon_i \qquad (5.8)$$

with $\epsilon_i = p_i - R p_i' - t$ and $W_i = \left(R C_i R^T + C_i'\right)^{-1}$.
This problem can be solved iteratively with an optimization routine (e.g. the Gauss-
Newton algorithm), using the previously computed ˆR and ˆt as an initial estimate. The
obtained solution weights the point locations contained in A and B according to their
uncertainty. A similar matrix-weighted least squares method can be found in [10].
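A compact sketch of the unweighted solution (steps 1 to 4) is given below; it is an
illustration only, with A and B assumed to be n × 2 matrices holding corresponding point
locations as rows, and a standard determinant check handling the reflection case.

    import numpy as np

    def solve_motion(A, B):
        """Estimate R_hat and t_hat such that B ≈ A R_hat^T + J t_hat^T (unweighted)."""
        alpha = A.mean(axis=0)                      # column mean vectors (step 2)
        beta = B.mean(axis=0)
        A_star, B_star = A - alpha, B - beta        # centered point sets
        S = A_star.T @ B_star                       # step 3
        W_mat, _, Vt = np.linalg.svd(S)             # W_mat ~ W, Vt ~ V^T of the thesis notation
        R_hat_T = W_mat @ Vt                        # (5.6): R_hat^T = W V^T
        if np.linalg.det(R_hat_T) < 0:              # reflect an eigenvector if necessary
            W_mat[:, -1] *= -1.0
            R_hat_T = W_mat @ Vt
        t_hat = (B - A @ R_hat_T).mean(axis=0)      # (5.7)
        return R_hat_T.T, t_hat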
The motion solving algorithm described above is used in this thesis to determine
the initial and final pose of the mobile platform by detecting landmarks in the first and
last image frames captured with the stereo camera. In this case the matrix B contains
the absolute position of the detected landmarks and the matrix A the position of the
detected landmarks measured by the stereo camera in the platform coordinate frame.
The absolute position of all available landmarks is precisely measured beforehand and
stored in a database. The motion solving algorithm is used to infer the translation and
rotation necessary to map the absolute coordinate frame onto the platform coordinate
frame, thus yielding the position and orientation of the mobile platform with respect to
the absolute coordinate frame. Note that in equation (5.8) the weight matrices $W_i$ can
be set equal to $W_i = C_i'^{-1}$ in this case, since the uncertainty of the precisely measured
landmark positions in the absolute coordinate frame $C_i$ is supposed to be negligible.
The matrices $C_i'$ can also be used to determine the uncertainty in the computed pose.
In the case of the initial pose the inferred pose uncertainty will serve to initialize the
state covariance matrix P0 in the time update equations of the Extended Kalman Filter
(see Section 4.1).
5.8 Including Landmark Detection in the EKF
In order to improve the localization of the mobile platform through image processing,
the lab environment is equipped with five artificial landmarks distributed on two ad-
jacent walls. The landmarks consist of white disks of approximately 15 cm in diameter
on black cardboard. The high contrast between landmark and background makes the
landmark detection particularly efficient. The arrangement of the landmarks can be
seen in Figure 5.11 and is also illustrated in Figure 5.3. The absolute position of all
landmarks is precisely measured and stored in a database.
In order to include landmark detection in the sensor fusion algorithm, some new
concepts must be introduced. These will enable us to use the Extended Kalman Filter
algorithm to improve the pose estimation of the mobile platform. The procedure is
described below and can be visualized in Figure 5.12.
• Assuming that a total of n landmarks are available, define an augmented state
vector consisting of the platform pose and the position of the landmarks in the
platform coordinate frame:
$$x^a = \begin{pmatrix} x & {}^P l_1 & \ldots & {}^P l_n \end{pmatrix}^T \qquad (5.9)$$
Jung.Rapport
Jung.Rapport
Jung.Rapport
Jung.Rapport
Jung.Rapport
Jung.Rapport
Jung.Rapport
Jung.Rapport
Jung.Rapport
Jung.Rapport
Jung.Rapport
Jung.Rapport
Jung.Rapport
Jung.Rapport
Jung.Rapport
Jung.Rapport
Jung.Rapport
Jung.Rapport
Jung.Rapport
Jung.Rapport
Jung.Rapport
Jung.Rapport
Jung.Rapport
Jung.Rapport
Jung.Rapport
Jung.Rapport
Jung.Rapport
Jung.Rapport

Más contenido relacionado

La actualidad más candente

Mark Quinn Thesis
Mark Quinn ThesisMark Quinn Thesis
Mark Quinn ThesisMark Quinn
 
Morton john canty image analysis and pattern recognition for remote sensing...
Morton john canty   image analysis and pattern recognition for remote sensing...Morton john canty   image analysis and pattern recognition for remote sensing...
Morton john canty image analysis and pattern recognition for remote sensing...Kevin Peña Ramos
 
KINEMATICS, TRAJECTORY PLANNING AND DYNAMICS OF A PUMA 560 - Mazzali A., Patr...
KINEMATICS, TRAJECTORY PLANNING AND DYNAMICS OF A PUMA 560 - Mazzali A., Patr...KINEMATICS, TRAJECTORY PLANNING AND DYNAMICS OF A PUMA 560 - Mazzali A., Patr...
KINEMATICS, TRAJECTORY PLANNING AND DYNAMICS OF A PUMA 560 - Mazzali A., Patr...AlessandroMazzali
 
Ellum, C.M. (2001). The development of a backpack mobile mapping system
Ellum, C.M. (2001). The development of a backpack mobile mapping systemEllum, C.M. (2001). The development of a backpack mobile mapping system
Ellum, C.M. (2001). The development of a backpack mobile mapping systemCameron Ellum
 
23540646 carbon-c13-nmr-spectroscopy
23540646 carbon-c13-nmr-spectroscopy23540646 carbon-c13-nmr-spectroscopy
23540646 carbon-c13-nmr-spectroscopydharma281276
 
Badripatro dissertation 09307903
Badripatro dissertation 09307903Badripatro dissertation 09307903
Badripatro dissertation 09307903patrobadri
 
Maxime Javaux - Automated spike analysis
Maxime Javaux - Automated spike analysisMaxime Javaux - Automated spike analysis
Maxime Javaux - Automated spike analysisMaxime Javaux
 
Implementation of a Localization System for Sensor Networks-berkley
Implementation of a Localization System for Sensor Networks-berkleyImplementation of a Localization System for Sensor Networks-berkley
Implementation of a Localization System for Sensor Networks-berkleyFarhad Gholami
 
Principles of Linear Motion of Objects - Physics - Explained by Arun Umrao
Principles of Linear Motion of Objects - Physics - Explained by Arun UmraoPrinciples of Linear Motion of Objects - Physics - Explained by Arun Umrao
Principles of Linear Motion of Objects - Physics - Explained by Arun Umraossuserd6b1fd
 
Quantum Variables in Finance and Neuroscience Lecture Slides
Quantum Variables in Finance and Neuroscience Lecture SlidesQuantum Variables in Finance and Neuroscience Lecture Slides
Quantum Variables in Finance and Neuroscience Lecture SlidesLester Ingber
 
Ali-Dissertation-5June2015
Ali-Dissertation-5June2015Ali-Dissertation-5June2015
Ali-Dissertation-5June2015Ali Farznahe Far
 
Pulse Preamplifiers for CTA Camera Photodetectors
Pulse Preamplifiers for CTA Camera PhotodetectorsPulse Preamplifiers for CTA Camera Photodetectors
Pulse Preamplifiers for CTA Camera Photodetectorsnachod40
 
Nonlinear Simulation of Rotor-Bearing System Dynamics
Nonlinear Simulation of Rotor-Bearing System DynamicsNonlinear Simulation of Rotor-Bearing System Dynamics
Nonlinear Simulation of Rotor-Bearing System DynamicsFrederik Budde
 
MACHINE LEARNING METHODS FOR THE
MACHINE LEARNING METHODS FOR THEMACHINE LEARNING METHODS FOR THE
MACHINE LEARNING METHODS FOR THEbutest
 

La actualidad más candente (19)

Mark Quinn Thesis
Mark Quinn ThesisMark Quinn Thesis
Mark Quinn Thesis
 
Morton john canty image analysis and pattern recognition for remote sensing...
Morton john canty   image analysis and pattern recognition for remote sensing...Morton john canty   image analysis and pattern recognition for remote sensing...
Morton john canty image analysis and pattern recognition for remote sensing...
 
KINEMATICS, TRAJECTORY PLANNING AND DYNAMICS OF A PUMA 560 - Mazzali A., Patr...
KINEMATICS, TRAJECTORY PLANNING AND DYNAMICS OF A PUMA 560 - Mazzali A., Patr...KINEMATICS, TRAJECTORY PLANNING AND DYNAMICS OF A PUMA 560 - Mazzali A., Patr...
KINEMATICS, TRAJECTORY PLANNING AND DYNAMICS OF A PUMA 560 - Mazzali A., Patr...
 
TGNDissertationMain
TGNDissertationMainTGNDissertationMain
TGNDissertationMain
 
Ellum, C.M. (2001). The development of a backpack mobile mapping system
Ellum, C.M. (2001). The development of a backpack mobile mapping systemEllum, C.M. (2001). The development of a backpack mobile mapping system
Ellum, C.M. (2001). The development of a backpack mobile mapping system
 
23540646 carbon-c13-nmr-spectroscopy
23540646 carbon-c13-nmr-spectroscopy23540646 carbon-c13-nmr-spectroscopy
23540646 carbon-c13-nmr-spectroscopy
 
Sarda_uta_2502M_12076
Sarda_uta_2502M_12076Sarda_uta_2502M_12076
Sarda_uta_2502M_12076
 
Diederik Fokkema - Thesis
Diederik Fokkema - ThesisDiederik Fokkema - Thesis
Diederik Fokkema - Thesis
 
Badripatro dissertation 09307903
Badripatro dissertation 09307903Badripatro dissertation 09307903
Badripatro dissertation 09307903
 
Maxime Javaux - Automated spike analysis
Maxime Javaux - Automated spike analysisMaxime Javaux - Automated spike analysis
Maxime Javaux - Automated spike analysis
 
Neural networks and deep learning
Neural networks and deep learningNeural networks and deep learning
Neural networks and deep learning
 
Implementation of a Localization System for Sensor Networks-berkley
Implementation of a Localization System for Sensor Networks-berkleyImplementation of a Localization System for Sensor Networks-berkley
Implementation of a Localization System for Sensor Networks-berkley
 
Principles of Linear Motion of Objects - Physics - Explained by Arun Umrao
Principles of Linear Motion of Objects - Physics - Explained by Arun UmraoPrinciples of Linear Motion of Objects - Physics - Explained by Arun Umrao
Principles of Linear Motion of Objects - Physics - Explained by Arun Umrao
 
Quantum Variables in Finance and Neuroscience Lecture Slides
Quantum Variables in Finance and Neuroscience Lecture SlidesQuantum Variables in Finance and Neuroscience Lecture Slides
Quantum Variables in Finance and Neuroscience Lecture Slides
 
Ali-Dissertation-5June2015
Ali-Dissertation-5June2015Ali-Dissertation-5June2015
Ali-Dissertation-5June2015
 
Pulse Preamplifiers for CTA Camera Photodetectors
Pulse Preamplifiers for CTA Camera PhotodetectorsPulse Preamplifiers for CTA Camera Photodetectors
Pulse Preamplifiers for CTA Camera Photodetectors
 
Nonlinear Simulation of Rotor-Bearing System Dynamics
Nonlinear Simulation of Rotor-Bearing System DynamicsNonlinear Simulation of Rotor-Bearing System Dynamics
Nonlinear Simulation of Rotor-Bearing System Dynamics
 
MACHINE LEARNING METHODS FOR THE
MACHINE LEARNING METHODS FOR THEMACHINE LEARNING METHODS FOR THE
MACHINE LEARNING METHODS FOR THE
 
Optimal control systems
Optimal control systemsOptimal control systems
Optimal control systems
 

Destacado

Apresentação digital arquivar +dp
Apresentação digital arquivar +dpApresentação digital arquivar +dp
Apresentação digital arquivar +dpLUIZ FABIANO DOMINGOS
 
вдв 03 (717)
вдв 03 (717)вдв 03 (717)
вдв 03 (717)vdv_volsk
 
Thomas Selso CV 2017 Dansk
Thomas Selso CV 2017 DanskThomas Selso CV 2017 Dansk
Thomas Selso CV 2017 DanskThomas Selsø
 
Separation of rhodium III and iridium IV chlorido complexes using polymer mic...
Separation of rhodium III and iridium IV chlorido complexes using polymer mic...Separation of rhodium III and iridium IV chlorido complexes using polymer mic...
Separation of rhodium III and iridium IV chlorido complexes using polymer mic...Avela Majavu (PhD)
 
Jak zwiększyliśmy ruch organiczny o 275% dla centrumtlumaczen.pl dzięki pozyc...
Jak zwiększyliśmy ruch organiczny o 275% dla centrumtlumaczen.pl dzięki pozyc...Jak zwiększyliśmy ruch organiczny o 275% dla centrumtlumaczen.pl dzięki pozyc...
Jak zwiększyliśmy ruch organiczny o 275% dla centrumtlumaczen.pl dzięki pozyc...EACTIVE wiemy jak
 
Pemvha foros
Pemvha forosPemvha foros
Pemvha forosPEMVHA
 
USO DE BIBLIOTECA UNIMINUTO
USO DE BIBLIOTECA UNIMINUTOUSO DE BIBLIOTECA UNIMINUTO
USO DE BIBLIOTECA UNIMINUTOCarolina Parra
 

Destacado (13)

Apresentação digital arquivar +dp
Apresentação digital arquivar +dpApresentação digital arquivar +dp
Apresentação digital arquivar +dp
 
вдв 03 (717)
вдв 03 (717)вдв 03 (717)
вдв 03 (717)
 
University Certification
University CertificationUniversity Certification
University Certification
 
Thomas Selso CV 2017 Dansk
Thomas Selso CV 2017 DanskThomas Selso CV 2017 Dansk
Thomas Selso CV 2017 Dansk
 
Separation of rhodium III and iridium IV chlorido complexes using polymer mic...
Separation of rhodium III and iridium IV chlorido complexes using polymer mic...Separation of rhodium III and iridium IV chlorido complexes using polymer mic...
Separation of rhodium III and iridium IV chlorido complexes using polymer mic...
 
Disco
DiscoDisco
Disco
 
Jak zwiększyliśmy ruch organiczny o 275% dla centrumtlumaczen.pl dzięki pozyc...
Jak zwiększyliśmy ruch organiczny o 275% dla centrumtlumaczen.pl dzięki pozyc...Jak zwiększyliśmy ruch organiczny o 275% dla centrumtlumaczen.pl dzięki pozyc...
Jak zwiększyliśmy ruch organiczny o 275% dla centrumtlumaczen.pl dzięki pozyc...
 
No Smoking
No SmokingNo Smoking
No Smoking
 
Pemvha foros
Pemvha forosPemvha foros
Pemvha foros
 
NAELA Elder Law Month
NAELA Elder Law MonthNAELA Elder Law Month
NAELA Elder Law Month
 
USO DE BIBLIOTECA UNIMINUTO
USO DE BIBLIOTECA UNIMINUTOUSO DE BIBLIOTECA UNIMINUTO
USO DE BIBLIOTECA UNIMINUTO
 
DSS GDM final presentation pp (4)
DSS GDM final presentation pp (4)DSS GDM final presentation pp (4)
DSS GDM final presentation pp (4)
 
Sarahme
SarahmeSarahme
Sarahme
 

Similar a Jung.Rapport

NEW METHODS FOR TRIANGULATION-BASED SHAPE ACQUISITION USING LASER SCANNERS.pdf
NEW METHODS FOR TRIANGULATION-BASED SHAPE ACQUISITION USING LASER SCANNERS.pdfNEW METHODS FOR TRIANGULATION-BASED SHAPE ACQUISITION USING LASER SCANNERS.pdf
NEW METHODS FOR TRIANGULATION-BASED SHAPE ACQUISITION USING LASER SCANNERS.pdfTrieuDoMinh
 
Lecture notes on planetary sciences and orbit determination
Lecture notes on planetary sciences and orbit determinationLecture notes on planetary sciences and orbit determination
Lecture notes on planetary sciences and orbit determinationErnst Schrama
 
Real-time and high-speed vibrissae monitoring with dynamic vision sensors and...
Real-time and high-speed vibrissae monitoring with dynamic vision sensors and...Real-time and high-speed vibrissae monitoring with dynamic vision sensors and...
Real-time and high-speed vibrissae monitoring with dynamic vision sensors and...Aryan Esfandiari
 
Au anthea-ws-201011-ma sc-thesis
Au anthea-ws-201011-ma sc-thesisAu anthea-ws-201011-ma sc-thesis
Au anthea-ws-201011-ma sc-thesisevegod
 
Efficiency Optimization of Realtime GPU Raytracing in Modeling of Car2Car Com...
Efficiency Optimization of Realtime GPU Raytracing in Modeling of Car2Car Com...Efficiency Optimization of Realtime GPU Raytracing in Modeling of Car2Car Com...
Efficiency Optimization of Realtime GPU Raytracing in Modeling of Car2Car Com...Alexander Zhdanov
 
grDirkEkelschotFINAL__2_
grDirkEkelschotFINAL__2_grDirkEkelschotFINAL__2_
grDirkEkelschotFINAL__2_Dirk Ekelschot
 
Bachelor Thesis .pdf (2010)
Bachelor Thesis .pdf (2010)Bachelor Thesis .pdf (2010)
Bachelor Thesis .pdf (2010)Dimitar Dimitrov
 
Thesis_Eddie_Zisser_final_submission
Thesis_Eddie_Zisser_final_submissionThesis_Eddie_Zisser_final_submission
Thesis_Eddie_Zisser_final_submissionEddie Zisser
 

Similar a Jung.Rapport (20)

NEW METHODS FOR TRIANGULATION-BASED SHAPE ACQUISITION USING LASER SCANNERS.pdf
NEW METHODS FOR TRIANGULATION-BASED SHAPE ACQUISITION USING LASER SCANNERS.pdfNEW METHODS FOR TRIANGULATION-BASED SHAPE ACQUISITION USING LASER SCANNERS.pdf
NEW METHODS FOR TRIANGULATION-BASED SHAPE ACQUISITION USING LASER SCANNERS.pdf
 
Lecture notes on planetary sciences and orbit determination
Lecture notes on planetary sciences and orbit determinationLecture notes on planetary sciences and orbit determination
Lecture notes on planetary sciences and orbit determination
 
Intro photo
Intro photoIntro photo
Intro photo
 
Thesis
ThesisThesis
Thesis
 
Thesis
ThesisThesis
Thesis
 
Real-time and high-speed vibrissae monitoring with dynamic vision sensors and...
Real-time and high-speed vibrissae monitoring with dynamic vision sensors and...Real-time and high-speed vibrissae monitoring with dynamic vision sensors and...
Real-time and high-speed vibrissae monitoring with dynamic vision sensors and...
 
D-STG-SG02.16.1-2001-PDF-E.pdf
D-STG-SG02.16.1-2001-PDF-E.pdfD-STG-SG02.16.1-2001-PDF-E.pdf
D-STG-SG02.16.1-2001-PDF-E.pdf
 
Manual Gstat
Manual GstatManual Gstat
Manual Gstat
 
thesis
thesisthesis
thesis
 
Au anthea-ws-201011-ma sc-thesis
Au anthea-ws-201011-ma sc-thesisAu anthea-ws-201011-ma sc-thesis
Au anthea-ws-201011-ma sc-thesis
 
Efficiency Optimization of Realtime GPU Raytracing in Modeling of Car2Car Com...
Efficiency Optimization of Realtime GPU Raytracing in Modeling of Car2Car Com...Efficiency Optimization of Realtime GPU Raytracing in Modeling of Car2Car Com...
Efficiency Optimization of Realtime GPU Raytracing in Modeling of Car2Car Com...
 
grDirkEkelschotFINAL__2_
grDirkEkelschotFINAL__2_grDirkEkelschotFINAL__2_
grDirkEkelschotFINAL__2_
 
MS_Thesis
MS_ThesisMS_Thesis
MS_Thesis
 
time_series.pdf
time_series.pdftime_series.pdf
time_series.pdf
 
phd_unimi_R08725
phd_unimi_R08725phd_unimi_R08725
phd_unimi_R08725
 
book_dziekan
book_dziekanbook_dziekan
book_dziekan
 
eur22904en.pdf
eur22904en.pdfeur22904en.pdf
eur22904en.pdf
 
Bachelor Thesis .pdf (2010)
Bachelor Thesis .pdf (2010)Bachelor Thesis .pdf (2010)
Bachelor Thesis .pdf (2010)
 
BenThesis
BenThesisBenThesis
BenThesis
 
Thesis_Eddie_Zisser_final_submission
Thesis_Eddie_Zisser_final_submissionThesis_Eddie_Zisser_final_submission
Thesis_Eddie_Zisser_final_submission
 

Jung.Rapport

Contents (continued)

4 Sensor Fusion . . . 39
4.1 The Extended Kalman Filter . . . 40
4.2 Delayed-State Measurement Problem . . . 42
4.3 Merging of Incremental Pose Estimates . . . 43
4.4 Including Absolute Position Updates . . . 45
5 Improving Localization with a Stereo Camera . . . 48
5.1 Approach . . . 48
5.2 Hardware Description . . . 49
5.3 Depth Map Generation . . . 49
5.4 Coordinate Systems . . . 50
5.5 Object Localization . . . 52
5.5.1 Stereo Camera Model . . . 52
5.5.2 Triangulation Equations of the Reconstruction Process . . . 54
5.6 Systematic and Non Systematic Errors . . . 54
5.6.1 Camera Calibration . . . 54
5.6.2 Non Systematic Errors . . . 59
5.7 Motion Solving Algorithm . . . 63
5.8 Including Landmark Detection in the EKF . . . 66
5.8.1 Modification of the Time Update Equations . . . 68
5.8.2 Modification of the Measurement Update Equations . . . 71
5.8.3 Filter Initialization . . . 73
6 Experimental Results . . . 75
6.1 Pose Error . . . 75
6.1.1 Straight Track . . . 75
6.1.2 Circular Track . . . 78
6.2 Absolute Position Update Frequency . . . 79
6.3 Filter Initialization . . . 79
6.4 Stereo Image Processing . . . 82
7 Conclusions and Future Work . . . 88
A Fundamentals on Stochastic Theory . . . 90
List of Figures

1.1 Mobile platform . . . 3
2.1 Schematic top view of the mobile platform . . . 4
2.2 Inadmissible wheel configuration . . . 6
2.3a Estimated trajectory using dd solution . . . 11
2.3b Estimated trajectory using lsq solution . . . 11
2.4a Estimated orientation using dd solution . . . 12
2.4b Estimated orientation using lsq solution . . . 12
3.1 Distribution of drift compensated gyro measurements (ψ̇ = 0) . . . 32
3.2 Isolines of the magnetic tracker measurement coordinates . . . 35
3.3 Comparison of tracker measurements before and after calibration . . . 37
4.1 Diagram showing the implemented Kalman Filter structure . . . 44
5.1 Stereo camera . . . 50
5.2 Disparity in stereo images . . . 50
5.3 Coordinate systems in lab environment . . . 51
5.4 Stereo camera coordinate systems . . . 52
5.5 Stereo geometry . . . 53
5.6 Camera calibration setup . . . 56
5.7 Comparison of ideal and extracted pixel coordinates in calibration pattern . . . 57
5.8 Reconstructed marker locations . . . 58
5.9 Uncertainty in stereo image processing . . . 60
5.10 Uncertainty in object locations through stereo image processing . . . 62
5.11 Lab environment equipped with landmarks . . . 67
5.12 Block diagram of localization algorithm using stereo image processing . . . 69
6.1 Influence of absolute position update frequency on pose error . . . 80
6.2 Effect of incorrect filter initialization on pose estimation . . . 81
6.3 Commanded platform motion for image processing experiment . . . 84
6.4 Estimated trajectory without sensor fusion (odometry alone) . . . 85
6.5 Estimated trajectory using stereo image processing . . . 86
List of Tables

2.1a Pose error using odometry (circle, R = 1.75 m), IWC . . . 13
2.1b Pose error using odometry (circle, R = 1 m), AWC . . . 13
3.1 Experimental results for the determination of V̄^o (straight line motion) . . . 25
3.2 Experimental results for the determination of V̄^o (circular motion) . . . 26
3.3 Average K^g for different yaw rates ψ̇ . . . 33
3.4 Value of V̄^g for different yaw rates ψ̇ . . . 34
4.1a EKF Time Update Equations . . . 42
4.1b EKF Measurement Update Equations . . . 42
6.1 Pose error for a 4000 mm straight track, velocity 1 m/s . . . 76
6.2a Pose error for a circular track (R = 1.75 m, 0.3 m/s, IWC) . . . 78
6.2b Pose error for a circular track (R = 1 m, 0.3 m/s, AWC) . . . 78
6.3 CPU time required for localization algorithm (experiment: 60 s) . . . 87
List of Symbols

(O, x, y)  absolute coordinate frame
(P, ^P x, ^P y)  mobile platform coordinate frame
(C, ^C x, ^C y)  stereo camera coordinate frame
A_k  Jacobian matrix of Kalman Filter at time step k
b  half the stereo camera baseline
^P b_i  position of feature i in the platform coordinate frame
C_x  horizontal pixel coordinate of image center
C^o_{∆s_i}  odometry error covariance matrix
C^o_{∆^P x}  odometry error covariance matrix
d_x  distance separating two CCD sensor elements
e_lat  lateral position estimate error w.r.t. the platform coordinate frame
e_long  longitudinal position estimate error w.r.t. the platform coordinate frame
e_x  pose estimate error in absolute coordinate frame
f  focal length of stereo camera
f(·)  process model of Kalman Filter
i_drive  transmission ratio from driving motor to wheel revolution angle
i_steer  transmission ratio from steering motor to wheel steering angle
J_c  Jacobian matrix of stereo reconstruction process
K^g  gyroscope scaling factor
K_k  gain factor of Kalman Filter measurement update equations at time step k
l  longitudinal wheel base of mobile platform
^P l_i  position of landmark i in the platform coordinate frame
P_k  state covariance matrix of Kalman Filter at time step k
P_{dr,k}  state covariance matrix of dead reckoning process at time step k
Q_k  process noise covariance matrix of Kalman Filter at time step k
R_k  measurement noise covariance matrix of Kalman Filter at time step k
R_{abs,k}  rotation matrix from absolute to platform coordinate frame
R_{in,k}  rotation increment matrix of platform motion
u_l, u_r  horizontal image plane coordinates of left and right cameras
u_{l,M}, u_{r,M}  horizontal image pixel coordinates of left and right cameras
v_i  velocity of wheel i at contact point with floor (i ∈ {1, 2, 3, 4})
v_{i,x}, v_{i,y}  velocity components of wheel i in platform coordinate frame
V̇^g  time based noise variance basis of gyroscope
V̄^g  yaw rate based noise variance basis of gyroscope
v_k  measurement noise of Kalman Filter at time step k
V̄^o  odometry variance basis
V^c_l, V^c_r  pixel based variance of left and right cameras
w  lateral wheel base of mobile platform
w_k  process noise of Kalman Filter at time step k
W_k  Jacobian matrix of Kalman Filter at time step k
x  pose vector of mobile platform in absolute coordinate frame
ẋ  velocity vector of mobile platform in absolute coordinate frame
x_a  augmented state vector including landmark locations
^P ẋ  velocity vector of mobile platform in platform coordinate frame
x_k  state vector of Kalman Filter at time step k
x̂⁻_k  a priori state estimate of Kalman Filter at time step k
x̂_k  a posteriori state estimate of Kalman Filter at time step k
x^t  position measurement vector of magnetic tracking device
^P x_i, ^P y_i  position of wheel module i in platform coordinate frame (i ∈ {1, 2, 3, 4})
z_k  measurement vector of Kalman Filter at time step k
∆s  actual traveled distance of mobile platform in one sample time interval
∆s_i  actual traveled distance of wheel i (i ∈ {1, 2, 3, 4})
∆s^o_i  traveled distance of wheel i measured by odometry (i ∈ {1, 2, 3, 4})
∆t^g  gyroscope sample time interval
∆t^o  odometry sample time interval
∆x_k  pose increment vector of mobile platform in absolute coordinate frame
∆^P x_k  pose increment vector of mobile platform in platform coordinate frame
ϕ_i  steering angle of wheel i (i ∈ {1, 2, 3, 4})
∆ψ^g  orientation increment measured by gyroscope
ψ̇^g_drift  gyroscope drift
Chapter 1
Motivation and Problem Definition

Generally speaking, the motion of an autonomous wheeled mobile robot features three degrees of freedom: its position in a plane (two d.o.f.) and its orientation. In order for the robot to successfully accomplish its mission, precise information about its pose (position and orientation) is required.

Various sensors can be used to gather information about the robot's pose. Internal sensors on the robot provide information about its internal state (wheel angles or angular velocities, yaw rate, ...). Examples of internal sensors are wheel encoders, a gyroscope or an accelerometer. Localization methods based on these types of sensors are known as dead reckoning methods. Internal sensors usually present a relatively high accuracy and sample rate. The principle of dead reckoning implies that position increments are used to update an absolute position. Due to the nature of this process, measurement errors and inaccuracies accumulate and increase the position uncertainty with traveled distance. External sensors provide information about the robot's environment. Examples of external sensors are a laser range finder, a magnetic tracker or a global positioning system (GPS). Usually, the information returned by external sensors is an absolute position with respect to the environment. External sensors generally present a relatively low accuracy and sample rate.

The basic idea of a sensor fusion algorithm is to maximize the likelihood of a quantity (e.g. position) given redundant or complementary measurements originating
from different sensor devices. It is thus possible to combine the advantages of both types of sensors: absolute position updates are periodically merged with the highly accurate position information of a dead reckoning algorithm, thus resetting the growing position uncertainty.

This work focuses on the development of a position estimator for an omnidirectional wheeled mobile platform. The objective is to determine the pose of the mobile platform as accurately as possible using multiple sensor devices. The term pose designates the position as well as the orientation. The sensors available for this task are the following:

• wheel encoders
• a gyroscope
• a magnetic tracker
• a stereo camera

The disposition of these devices on the mobile platform can be seen in Figure 1.1. It shall be emphasized that this is an omnidirectional platform, since independent steering angles and wheel velocities can be assigned to each of the four wheel modules. This enables decoupled motion in the three available degrees of freedom: longitudinal velocity, lateral velocity and yaw rate.

This report is organized as follows: Chapter 2 establishes the mathematical description of the platform kinematics and its numerical implementation. Chapter 3 establishes mathematical error models for the various sensors in use on the mobile platform. Chapter 4 presents the sensor fusion strategy used. Chapter 5 introduces some techniques for improved localization using image pairs provided by a stereo camera. Chapter 6 gives an overview of the conducted experiments and simulations and their results. An assessment of the results and an outlook of the future work to be done are given in Chapter 7.
[Figure 1.1: Mobile platform — labelled components: stereo camera, magnetic tracker receiver, wheel module, gyroscope]
Chapter 2
Kinematical Model

[Figure 2.1: Schematic top view of the mobile platform — absolute frame (O, x, y), platform frame (^P x, ^P y) centered at P(x, y), wheel modules WM1–WM4 with steering angles ϕ1–ϕ4, wheel bases 2l and 2w]

A schematic view of the mobile platform containing the most important parameters for the kinematical model is shown in Figure 2.1. The wheel modules are labelled WM1 to WM4.
Generally, the use of a kinematic model involves the relations between the coordinates in two different spaces. A vector q in joint space is transformed into a vector x in Cartesian space. In the case of the presented mobile platform, the angular velocities of the wheels are transformed into the linear velocities and the yaw rate (angular velocity around the vertical axis) of the robot in the inertial coordinate frame. We will subdivide this procedure into two separate steps:

• transformation of wheel velocities v_i (i ∈ {1, 2, 3, 4}) into platform velocities ^P ẋ in the platform coordinate frame (P, ^P x, ^P y). This operation will be dealt with in Section 2.1.
• transformation of platform velocities ^P ẋ in the platform coordinate frame into platform velocities ẋ in the inertial coordinate frame (O, x, y). This operation will be dealt with in Section 2.2.

The platform velocity vectors are given by

    {}^P\dot{x} = [\,{}^P\dot{x} \;\; {}^P\dot{y} \;\; \dot{\psi}\,]^T  \quad\text{and}\quad  \dot{x} = [\,\dot{x} \;\; \dot{y} \;\; \dot{\psi}\,]^T

and include the yaw rate ψ̇.

2.1 Computing Incremental Position Updates

The mobile platform used in this work is equipped with four wheel modules, each of which is fitted with two electric motors. One of the electric motors commands the steering angle and the other one the revolution speed of the wheel. This amounts to eight independent inputs available to command the platform motion. Since the platform has only three degrees of freedom in the absolute coordinate frame (x, y and ψ), the system of equations governing the platform motion is overdetermined. An exact solution can only be found if the imaginary lines passing through the wheel revolution axes intersect in one and the same point. If the steering angles of the four wheel modules are not perfectly coordinated, this condition is violated, as illustrated in Figure 2.2. As a result wheel slip will occur. The center of the circle arc on which the platform center point is moving will be somewhere in the region spanned by all wheel
axes intersection points. A similar situation is encountered when the revolution speeds of the individual wheels are not perfectly coordinated. This redundancy needs to be taken into account when developing a kinematical model of the mobile platform. Two possible methods for solving this problem are proposed here. These methods are described in the next sections.

[Figure 2.2: Inadmissible wheel configuration]

2.1.1 Least Squares Solution

Given a platform motion in its own coordinate frame {}^P\dot{x} = [\,{}^P\dot{x} \;\; {}^P\dot{y} \;\; \dot{\psi}\,]^T, the corresponding wheel velocity components of wheel i (i ∈ {1, 2, 3, 4}) are given by the following set of equations:

    v_{i,x} = {}^P\dot{x} - \dot{\psi} \cdot {}^P y_i
    v_{i,y} = {}^P\dot{y} + \dot{\psi} \cdot {}^P x_i    (2.1)
Here v_{i,x} and v_{i,y} denote the wheel velocity components in the platform coordinate frame, and ^P x_i and ^P y_i denote the coordinates of the contact point of the wheel with the floor in the platform coordinate frame. In order to compute the platform motion given the velocity components of each wheel, the system of eight equations obtained by concatenating the systems of equations (2.1) for all wheels must be solved for ^P ẋ, ^P ẏ and ψ̇. We define matrix M as follows:

    M = \begin{bmatrix}
    1 & 0 & -{}^P y_1 \\
    0 & 1 & {}^P x_1 \\
    1 & 0 & -{}^P y_2 \\
    0 & 1 & {}^P x_2 \\
    1 & 0 & -{}^P y_3 \\
    0 & 1 & {}^P x_3 \\
    1 & 0 & -{}^P y_4 \\
    0 & 1 & {}^P x_4
    \end{bmatrix}

The system of equations (2.1) can now be written as follows:

    v = M \, {}^P\dot{x}    (2.2)

with v = [\,v_{1,x} \;\; v_{1,y} \;\; \dots \;\; v_{4,x} \;\; v_{4,y}\,]^T. Because M is not a square matrix, we cannot solve system (2.2) for the unknown ^P ẋ by matrix inversion. System (2.2) is said to be overdetermined, which was to be expected considering the kinematical redundancy discussed above. Left multiplying both sides of equation (2.2) by M^T, then by (M^T M)^{-1}, we obtain:

    {}^P\dot{x} = (M^T M)^{-1} M^T v    (2.3)

(M^T M)^{-1} M^T is the pseudoinverse of matrix M. The solution ^P ẋ minimizes the residue of each equation contained in the system (2.1) in a least squares sense.
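As a minimal illustration of eq. (2.3), the pseudoinverse solution can be sketched in a few lines of Python/NumPy. The wheel contact coordinates used below are placeholders, not the actual platform geometry, and the function name is illustrative:

```python
import numpy as np

# Assumed wheel contact coordinates (P_xi, P_yi) in the platform frame [m];
# the real values follow from the wheel bases 2l and 2w in Figure 2.1.
wheel_pos = [(0.35, 0.30), (0.35, -0.30), (-0.35, -0.30), (-0.35, 0.30)]

# Stack the two rows of eq. (2.1) for every wheel so that v = M * P_xdot.
M = np.vstack([np.array([[1.0, 0.0, -py],
                         [0.0, 1.0,  px]]) for (px, py) in wheel_pos])

def platform_velocity_lsq(v):
    """Least-squares platform velocity [P_xdot, P_ydot, psidot] from the stacked
    wheel velocity components v = [v1x, v1y, ..., v4x, v4y], cf. eq. (2.3)."""
    sol, *_ = np.linalg.lstsq(M, np.asarray(v, float), rcond=None)
    return sol

# Example: all wheels translating with 0.5 m/s along P_x yields a pure
# longitudinal platform velocity and zero yaw rate.
print(platform_velocity_lsq([0.5, 0.0] * 4))
```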
2.1.2 Differential Drive Solution

For a differential drive robot with two wheels, the kinematical equations are given by:

    {}^P\dot{x} = \frac{v_r + v_l}{2}, \qquad {}^P\dot{y} = 0, \qquad \dot{\psi} = \frac{v_r - v_l}{2w}    (2.4)

Here, v_r and v_l are the linear velocities of the right and left wheel respectively and 2w is the lateral wheelbase. For the omnidirectional mobile platform, a similar procedure can be applied: the velocity along the platform x-axis (y-axis) is computed as the average x-velocity (y-velocity) of the individual wheels. The yaw rate is the average of the yaw rates computed by grouping the wheel modules two by two:

    {}^P\dot{x} = \frac{v_{1,x} + v_{2,x} + v_{3,x} + v_{4,x}}{4}
    {}^P\dot{y} = \frac{v_{1,y} + v_{2,y} + v_{3,y} + v_{4,y}}{4}
    \dot{\psi} = \frac{1}{4}\left( \frac{v_{1,x} - v_{2,x}}{2w} + \frac{v_{2,y} - v_{3,y}}{2l} + \frac{v_{4,x} - v_{3,x}}{2w} + \frac{v_{1,y} - v_{4,y}}{2l} \right)    (2.5)

Here 2l and 2w are the distances separating the individual wheel modules as illustrated in Figure 2.1. For numerical implementation it is necessary to derive a discrete time version of these equations. Let ∆t^o designate the sample time interval of the odometry and let us assume that the wheel velocities v_i and the steering angle rates ϕ̇_i are constant. We write the system of equations (2.5) at time step k:
    \Delta {}^P x_k = \frac{1}{4} \sum_{i=1}^{4} \lambda_i \, v_{i,x} \, \Delta t^o
    \Delta {}^P y_k = \frac{1}{4} \sum_{i=1}^{4} \lambda_i \, v_{i,y} \, \Delta t^o
    \Delta \psi_k = \frac{1}{4}\left( \frac{\lambda_1 v_{1,x} \Delta t^o - \lambda_2 v_{2,x} \Delta t^o}{2w} + \frac{\lambda_4 v_{4,x} \Delta t^o - \lambda_3 v_{3,x} \Delta t^o}{2w} + \frac{\lambda_2 v_{2,y} \Delta t^o - \lambda_3 v_{3,y} \Delta t^o}{2l} + \frac{\lambda_1 v_{1,y} \Delta t^o - \lambda_4 v_{4,y} \Delta t^o}{2l} \right)    (2.6)

with

    \lambda_i = \frac{\sin(\Delta\varphi_{i,k}/2)}{\Delta\varphi_{i,k}/2}

The adjustment factor λ_i accounts for the fact that the path driven by the wheel during one sampling time interval is a circle arc and not a line segment. The steps leading from (2.5) to (2.6) are discussed in Section 2.2 for a similar set of equations.

2.1.3 Experimental Comparison of Both Solutions

The redundancy resolution methods presented above are evaluated for different trajectories of the mobile platform. For a circular trajectory the end position and orientation of the platform are compared to the end position and orientation estimated by odometry data. The resulting pose error is averaged over a series of ten runs. The estimated trajectory using both algorithms is shown in Figure 2.3a and Figure 2.3b (dd: differential drive, lsq: least squares). The estimated platform orientation using both algorithms is shown in Figure 2.4a and Figure 2.4b, along with the orientation obtained by integrating the gyroscope measurements. It can be seen that the least squares solution approximates the gyroscope data better. The average pose error for ten runs on a circle of radius R = 1.75 m is given in Table 2.1a. It is clear that the least squares algorithm performs best on this type of trajectory. By monitoring the steering angle of each wheel it turned out that the trajectory was completed with an inadmissible wheel configuration (IWC). To ensure an admissible wheel configuration (AWC), a similar path was driven while setting the wheels to the required steering
angles for the projected path without having to rely on the wheel steering angle controllers. The results are shown in Table 2.1b for a circle of radius R = 1 m. In this situation both algorithms yield equivalent results. Similar observations were made for straight line trajectories.

It must be concluded that the least squares algorithm is more likely to provide better pose estimates than the differential drive algorithm in the case of non admissible wheel configurations, which are typically encountered on trajectories including curves and rapid changes in direction of motion. It should also be pointed out that it is planned to reduce the occurrence of inadmissible wheel configurations by using an improved control structure for the wheel modules. The subsequent developments in this thesis are based on the differential drive solution, as it allows an analytical analysis of the odometric uncertainty propagation. The advantages of the least squares solution tend to vanish anyway when sensor fusion is performed.
[Figure 2.3a: Estimated trajectory using dd solution — platform position in the absolute coordinate system with start position, estimated and observed end positions; e_x: 0.094 m, e_y: 0.598 m]

[Figure 2.3b: Estimated trajectory using lsq solution — same experiment; e_x: 0.024 m, e_y: 0.327 m]

[Figure 2.4a: Estimated orientation using dd solution — platform orientation over 60 s compared with gyro data; e_ψ: 19.2°]

[Figure 2.4b: Estimated orientation using lsq solution — same experiment; e_ψ: 10.6°]
Method        | Direction        | e_long [mm] | e_lat [mm] | e_ψ [°]
Mean          | counterclockwise |         622 |        -96 |    19.8
Mean          | clockwise        |         328 |        -47 |    -9.8
Least squares | counterclockwise |         350 |        -24 |    11.2
Least squares | clockwise        |          97 |         -9 |    -2.4

Table 2.1a: Pose error using odometry (circle, R = 1.75 m), IWC

Method        | Direction        | e_long [mm] | e_lat [mm] | e_ψ [°]
Mean          | counterclockwise |          57 |         -3 |     3.6
Mean          | clockwise        |         -26 |          3 |     1.5
Least squares | counterclockwise |          49 |         -2 |     3.1
Least squares | clockwise        |         -32 |          3 |     1.8

Table 2.1b: Pose error using odometry (circle, R = 1 m), AWC
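Because the remainder of the thesis builds on the differential drive redundancy resolution of eq. (2.6), the incremental update can be sketched as follows. This is a hedged Python/NumPy illustration, not the platform's actual implementation; the wheel ordering and the helper name are assumptions:

```python
import numpy as np

def odometry_increment(v, dphi, l, w, dt):
    """Platform-frame pose increment of eq. (2.6).
    v:    4x2 array of wheel velocity components [v_ix, v_iy], wheels WM1..WM4
    dphi: steering angle increments of the four wheels during the sample interval
    l, w: half the longitudinal and lateral wheel bases of Figure 2.1
    dt:   odometry sample interval (s)"""
    dphi = np.asarray(dphi, float)
    lam = np.ones(4)
    nz = np.abs(dphi) > 1e-9
    lam[nz] = np.sin(dphi[nz] / 2.0) / (dphi[nz] / 2.0)   # circle-arc factor lambda_i

    dx = lam * v[:, 0] * dt   # corrected wheel displacements along P_x
    dy = lam * v[:, 1] * dt   # corrected wheel displacements along P_y

    dPx, dPy = dx.mean(), dy.mean()
    dpsi = 0.25 * ((dx[0] - dx[1]) / (2 * w) + (dx[3] - dx[2]) / (2 * w)
                   + (dy[1] - dy[2]) / (2 * l) + (dy[0] - dy[3]) / (2 * l))
    return np.array([dPx, dPy, dpsi])
```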
2.2 Absolute Position Updating

In the following analysis, the time dependency of the motion variables is explicitly indicated. Considering the omnidirectional nature of the mobile platform, the mathematical model for the platform motion is given by the following equations:

    \dot{x}(t) = {}^P\dot{x}(t) \cos\psi(t) - {}^P\dot{y}(t) \sin\psi(t)
    \dot{y}(t) = {}^P\dot{x}(t) \sin\psi(t) + {}^P\dot{y}(t) \cos\psi(t)    (2.7)

Here x(t) and y(t) denote the absolute platform positions and ψ(t) denotes the platform orientation. Together, these three variables define what is called the platform pose. As in the preceding section, ^P ẋ(t) and ^P ẏ(t) designate the platform velocities in its own coordinate frame and are computed according to the system of equations (2.5). The general analytical solution to this system of differential equations is difficult to compute. It is thus assumed that both the translational and the angular velocities are constant over the integration period. In the following equations this is indicated by the omission of the time dependency for these variables. Note that the assumption of constant velocities is akin to saying that the platform moves on a trajectory of constant curvature with constant velocity in the integration interval. Under this assumption the solution of (2.7) is written as:

    x(t) = x_0 + \int_0^t \dot{x}(\tau)\,d\tau = x_0 + \frac{{}^P\dot{x}}{\dot{\psi}} \left[ \sin(\psi_0 + \dot{\psi} t) - \sin\psi_0 \right] + \frac{{}^P\dot{y}}{\dot{\psi}} \left[ \cos(\psi_0 + \dot{\psi} t) - \cos\psi_0 \right]
    y(t) = y_0 + \int_0^t \dot{y}(\tau)\,d\tau = y_0 - \frac{{}^P\dot{x}}{\dot{\psi}} \left[ \cos(\psi_0 + \dot{\psi} t) - \cos\psi_0 \right] + \frac{{}^P\dot{y}}{\dot{\psi}} \left[ \sin(\psi_0 + \dot{\psi} t) - \sin\psi_0 \right]
    \psi(t) = \psi_0 + \dot{\psi} \, t

Using trigonometric identities we can rewrite this system of equations in the following
way:

    x(t) = x_0 + {}^P\dot{x}\, t \, \frac{\sin(\dot{\psi} t/2)}{\dot{\psi} t/2} \cos\!\left(\psi_0 + \frac{\dot{\psi} t}{2}\right) - {}^P\dot{y}\, t \, \frac{\sin(\dot{\psi} t/2)}{\dot{\psi} t/2} \sin\!\left(\psi_0 + \frac{\dot{\psi} t}{2}\right)
    y(t) = y_0 + {}^P\dot{x}\, t \, \frac{\sin(\dot{\psi} t/2)}{\dot{\psi} t/2} \sin\!\left(\psi_0 + \frac{\dot{\psi} t}{2}\right) + {}^P\dot{y}\, t \, \frac{\sin(\dot{\psi} t/2)}{\dot{\psi} t/2} \cos\!\left(\psi_0 + \frac{\dot{\psi} t}{2}\right)
    \psi(t) = \psi_0 + \dot{\psi} \, t

Note that the particular case of a straight line motion of the platform is included since lim_{x→0} sin(x)/x = 1. For a numerical implementation of this model, we adopt a discretized version of the motion equation. For time step k the discrete time motion equation of the platform is given by:

    \Delta x_k = \Delta{}^P x_k \, \frac{\sin(\Delta\psi_k/2)}{\Delta\psi_k/2} \cos\!\left(\psi_k + \frac{\Delta\psi_k}{2}\right) - \Delta{}^P y_k \, \frac{\sin(\Delta\psi_k/2)}{\Delta\psi_k/2} \sin\!\left(\psi_k + \frac{\Delta\psi_k}{2}\right)
    \Delta y_k = \Delta{}^P x_k \, \frac{\sin(\Delta\psi_k/2)}{\Delta\psi_k/2} \sin\!\left(\psi_k + \frac{\Delta\psi_k}{2}\right) + \Delta{}^P y_k \, \frac{\sin(\Delta\psi_k/2)}{\Delta\psi_k/2} \cos\!\left(\psi_k + \frac{\Delta\psi_k}{2}\right)    (2.8)

In case the platform motion is computed using odometry alone, the platform position increments ∆^P x_k, ∆^P y_k and ∆ψ_k are given by the system of equations (2.6). The numerical integration of (2.8) is performed using the Euler algorithm:

    x_{k+1} = x_k + \Delta x_k
    y_{k+1} = y_k + \Delta y_k
    \psi_{k+1} = \psi_k + \Delta\psi_k    (2.9)

The systems of equations (2.8) and (2.9) can be combined using vector notation:

    x_{k+1} = x_k + R_{abs,k} \cdot \Delta{}^P x_k    (2.10)

with x_k = [\,x_k \;\; y_k \;\; \psi_k\,]^T, \Delta{}^P x_k = [\,\Delta{}^P x_k \;\; \Delta{}^P y_k \;\; \Delta\psi_k\,]^T and
    R_{abs,k} = \begin{bmatrix}
    \cos(\psi_k + \Delta\psi_k/2) & -\sin(\psi_k + \Delta\psi_k/2) & 0 \\
    \sin(\psi_k + \Delta\psi_k/2) & \cos(\psi_k + \Delta\psi_k/2) & 0 \\
    0 & 0 & 1
    \end{bmatrix}
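A compact way to read eqs. (2.8)–(2.10) is as a single update function. The following Python sketch is only an illustration under the constant-velocity assumption, not the thesis code:

```python
import numpy as np

def absolute_pose_update(pose, dPx, dPy, dpsi):
    """One step of eqs. (2.8)-(2.9): rotate the platform-frame increments
    (dPx, dPy, dpsi) into the absolute frame and integrate with Euler's method.
    pose: current absolute pose [x, y, psi]."""
    sinc = 1.0 if abs(dpsi) < 1e-9 else np.sin(dpsi / 2.0) / (dpsi / 2.0)
    psi_mid = pose[2] + 0.5 * dpsi            # orientation at mid-interval
    dx = sinc * (dPx * np.cos(psi_mid) - dPy * np.sin(psi_mid))
    dy = sinc * (dPx * np.sin(psi_mid) + dPy * np.cos(psi_mid))
    return np.array([pose[0] + dx, pose[1] + dy, pose[2] + dpsi])
```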
Chapter 3
Sensor Error Models

In order to perform sensor fusion, information about the accuracy of the different sensors in use must be obtained. The measurement error Z can be considered a random variable and consists of an offset error e_offset and a zero mean noise part E_noise as described by the following equation:

    Z = e_{offset} + E_{noise}    (3.1)

The noise term is usually considered to follow a Gaussian probability distribution as indicated in Definition 6 in Appendix A, in which case it is referred to as white noise. In mathematical terms the mean is used to describe the systematic error induced by the measurement offset, whereas the variance is used to describe the intensity of the noise process. The formulas for computing the mean and variance are given in Definition 1 and Definition 2 in Appendix A. In the case of white noise these parameters are sufficient to describe the characteristics of the random variable Z. The following sensors are used for the localization of the mobile platform:

• encoders for the steering and revolution angles of each wheel
• a gyroscope
• a magnetic tracker
• a stereo camera

The rest of this section deals with the modeling of the available sensors and an assessment of their individual accuracies.

3.1 Odometry Error Model

The angular positions of the driving and the steering motors of each wheel module are measured by angular encoders fitted to the motor shafts. These encoders subdivide a complete shaft revolution into 500 periods. The gear transmission ratio is equal to i_drive = 30 for the driving motor and i_steer = 127.5 for the steering motor. One wheel revolution is thus subdivided into 15,000 and one steering revolution into 63,750 encoder periods. The encoder values are read out at a rate of 125 Hz. Because of the high transmission ratio and sampling rate, the errors caused by limited encoder resolution and sampling rate are neglected in subsequent developments.

According to [1], the following model holds for the traveled distance measured by the wheel encoders:

    \Delta s_i^o = \frac{1}{1 + \epsilon} \cdot \Delta s_i + \Delta s_{i,noise}    (3.2)

where ∆s_i = v_i · ∆t^o represents the actual distance traveled by wheel i. The first term on the right hand side of (3.2) takes into account the systematic errors induced by erroneous wheel radii, wheel misalignment and other systematic error sources. The second term on the right hand side takes into account non-systematic errors induced by travel on uneven floors, wheel slippage and other non-systematic error sources. The wheel steering angle measured by the encoder is best described by the following model:
    \varphi_i^o = \varphi_i + \Delta\varphi_{i,mis}    (3.3)

Here ϕ_i represents the actual steering angle of wheel i and ∆ϕ_{i,mis} is the misalignment of the wheel module with respect to the platform longitudinal axis. Note that in model (3.3) it was assumed that the only error present is a systematic one. In fact the gear train between the steering motor input and the wheel steering angle output is considered to be perfectly rigid. Unlike the transmission between the driving motor and the floor, where wheel slip can occur, the steering angle is not affected by any non systematic errors.

In the case of the mobile platform considered here, the systematic errors ε and ∆ϕ_{i,mis} mentioned above were greatly reduced by calibrating the platform wheel radii and wheel steering angle offsets. These errors will thus not be treated separately. The only error taken into account in subsequent developments is the non systematic error in the wheel traveled distances. This is not to say that there are no systematic errors anymore. According to [2], systematic errors that cannot be described deterministically can be treated like random errors by using the second order moment of the error distribution instead of the variance. The definition of the second order moment is given in equation (A.1) in Appendix A. This procedure will be applied whenever the systematic error fails to be completely eliminated by suitable calibration.

3.1.1 Pose Uncertainty Model

Assuming that the irregularities on the floor are independent in size, shape and location, the following statements can be made:

1. The error in the measured traveled distance of one wheel at sampling time k is statistically independent of the error in the measured traveled distance at sampling time k + 1. Next, the distribution of this error is assumed Gaussian. Although this assumption is not absolutely necessary for this model to be used in relationship with a Kalman Filter, it nevertheless simplifies matters. If it is assumed that the random error in the measured traveled distance of wheel i is proportional to the actual distance traveled by this wheel, the variance of the error in the traveled distance verifies the following relationship:
    V_i^o = \bar{V}^o \cdot \Delta s_i    (3.4)

where V_i^o is the error variance of the traveled distance measured by wheel module i and V̄^o is a proportionality coefficient termed the variance basis, which will have to be determined experimentally. Its units are m²/m.

2. The errors in the measured traveled distances of the individual wheels are statistically independent of each other, and thus uncorrelated. The error covariance matrix C^o_{∆s_i} of the measurement vector [∆s_1 ∆s_2 ∆s_3 ∆s_4]^T can be assumed to be the diagonal matrix given by equation (3.5):

    C^o_{\Delta s_i} = \bar{V}^o \cdot \begin{bmatrix} \Delta s_1 & 0 & 0 & 0 \\ 0 & \Delta s_2 & 0 & 0 \\ 0 & 0 & \Delta s_3 & 0 \\ 0 & 0 & 0 & \Delta s_4 \end{bmatrix}    (3.5)

To understand how the uncertainty in the traveled wheel distances translates into an uncertainty in the platform pose increments in its own coordinate frame, the error covariance matrix needs to be projected according to the following equation:

    C^o_{\Delta^P x} = J^{\Delta^P x}_{\Delta s_i} \, C^o_{\Delta s_i} \, \left( J^{\Delta^P x}_{\Delta s_i} \right)^T    (3.6)

where J^{\Delta^P x}_{\Delta s_i} is the Jacobian of the transformation relating wheel traveled distances to incremental platform poses in the platform coordinate frame according to the system of equations (2.3) or (2.6):

    J^{\Delta^P x}_{\Delta s_i} = \begin{bmatrix}
    \partial \Delta^P x / \partial \Delta s_1 & \partial \Delta^P x / \partial \Delta s_2 & \partial \Delta^P x / \partial \Delta s_3 & \partial \Delta^P x / \partial \Delta s_4 \\
    \partial \Delta^P y / \partial \Delta s_1 & \partial \Delta^P y / \partial \Delta s_2 & \partial \Delta^P y / \partial \Delta s_3 & \partial \Delta^P y / \partial \Delta s_4 \\
    \partial \Delta\psi / \partial \Delta s_1 & \partial \Delta\psi / \partial \Delta s_2 & \partial \Delta\psi / \partial \Delta s_3 & \partial \Delta\psi / \partial \Delta s_4
    \end{bmatrix}    (3.7)

As the platform moves along, the uncertainty in the platform pose increments is propagated to yield a pose uncertainty in the absolute coordinate system. This is described by the following pose error covariance update equation:

    P^o(t + \Delta t) = J^{x(t+\Delta t)}_{x(t)} \, P^o(t) \, \left( J^{x(t+\Delta t)}_{x(t)} \right)^T + J^{x(t+\Delta t)}_{\Delta^P x(t)} \, C^o_{\Delta^P x}(t) \, \left( J^{x(t+\Delta t)}_{\Delta^P x(t)} \right)^T    (3.8)
with the pose error covariance matrix at time t

    P^o(t) = \begin{bmatrix} p_{11}(t) & p_{12}(t) & p_{13}(t) \\ p_{21}(t) & p_{22}(t) & p_{23}(t) \\ p_{31}(t) & p_{32}(t) & p_{33}(t) \end{bmatrix}    (3.9)

and the Jacobian matrices

    J^{x(t+\Delta t)}_{x(t)} = \begin{bmatrix}
    \partial x(t+\Delta t)/\partial x(t) & \partial x(t+\Delta t)/\partial y(t) & \partial x(t+\Delta t)/\partial \psi(t) \\
    \partial y(t+\Delta t)/\partial x(t) & \partial y(t+\Delta t)/\partial y(t) & \partial y(t+\Delta t)/\partial \psi(t) \\
    \partial \psi(t+\Delta t)/\partial x(t) & \partial \psi(t+\Delta t)/\partial y(t) & \partial \psi(t+\Delta t)/\partial \psi(t)
    \end{bmatrix}    (3.10)

    J^{x(t+\Delta t)}_{\Delta^P x(t)} = \begin{bmatrix}
    \partial x(t+\Delta t)/\partial \Delta^P x(t) & \partial x(t+\Delta t)/\partial \Delta^P y(t) & \partial x(t+\Delta t)/\partial \Delta\psi(t) \\
    \partial y(t+\Delta t)/\partial \Delta^P x(t) & \partial y(t+\Delta t)/\partial \Delta^P y(t) & \partial y(t+\Delta t)/\partial \Delta\psi(t) \\
    \partial \psi(t+\Delta t)/\partial \Delta^P x(t) & \partial \psi(t+\Delta t)/\partial \Delta^P y(t) & \partial \psi(t+\Delta t)/\partial \Delta\psi(t)
    \end{bmatrix}    (3.11)

The Jacobians (3.10) and (3.11) can be computed using equations (2.8) and (2.9). Equation (3.8) plays an important role in the Kalman Filter and is used to update the state uncertainty, i.e. the uncertainty in the estimated platform pose. More on this subject will follow in Chapter 4.
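The chain from eq. (3.5) to eq. (3.8) can be condensed into a single propagation step. The following Python sketch only illustrates the structure of the update, not the thesis implementation; the Jacobians are assumed to have been evaluated elsewhere and are passed in as arguments:

```python
import numpy as np

def propagate_pose_covariance(P, J_pose, J_incr_pose, J_incr_wheels, ds, V_bar_o):
    """One step of the odometric uncertainty propagation, eqs. (3.5), (3.6), (3.8).
    P:             3x3 pose error covariance P^o(t) in the absolute frame
    J_pose:        3x3 Jacobian of eq. (3.10) (new pose w.r.t. old pose)
    J_incr_pose:   3x3 Jacobian of eq. (3.11) (new pose w.r.t. platform increments)
    J_incr_wheels: 3x4 Jacobian of eq. (3.7) (platform increments w.r.t. wheel distances)
    ds:            traveled distances of the four wheels in this sample interval
    V_bar_o:       odometry variance basis (m^2/m)"""
    C_ds = V_bar_o * np.diag(np.abs(ds))                          # eq. (3.5)
    C_dPx = J_incr_wheels @ C_ds @ J_incr_wheels.T                # eq. (3.6)
    return J_pose @ P @ J_pose.T + J_incr_pose @ C_dPx @ J_incr_pose.T   # eq. (3.8)
```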
Now that the arithmetics of the odometric uncertainty propagation have been established, a procedure for computing the odometric variance basis V̄^o can be proposed:

1. Using equation (3.8), an analytical expression for P^o(t + ∆t) is computed using the kinematical model developed in Chapter 2.
2. By letting ∆t → 0, the differential equation governing the evolution of P^o(t) is found. This matrix differential equation consists of multiple differential equations for the matrix entries p_ij(t).
3. An analytical expression for the pose error covariance matrix entries p_11(t), p_22(t) and p_33(t) is found by solving the system of differential equations mentioned above. This solution is likely to exist only for cases involving very simple platform trajectories (e.g. a straight line). These analytically computed position and orientation variances are then solved for the parameter V̄^o.
4. A series of experiments involving simple platform trajectories will yield an approximate value for the position and orientation variances mentioned above and thus enable one to compute an approximate value for V̄^o.
5. By injecting V̄^o in a simulation model, it is possible to compare simulated and real position and orientation variances for different platform trajectories. The comparison of simulated and real variances will yield two important results:
   • is the discrete simulation model accurate enough to propagate the pose uncertainty (discretization error)?
   • acceptance or rejection of the proposed odometry model

Note that the proposed procedure requires the kinematic model of the platform to be fairly simple to avoid troublesome analytical expressions. Therefore, the linear redundancy resolution scheme discussed in Section 2.1.2 will be used instead of the more accurate model proposed in Section 2.1.1. The arithmetics involved in the method presented are derived in [2] for the simpler case of a two wheeled differential drive robot. Here, we will simply present the results for the case of a four wheeled omnidirectional mobile robot with the proposed odometry model.

3.1.2 Analytical Expression for V̄^o

The procedure described above was carried out for the case of the platform moving on a straight line trajectory along the platform longitudinal ^P x-axis. The analytical expression for the odometry variance basis is as follows:
    \bar{V}^o = \frac{4 \, p_{11}}{\Delta s} = \frac{48 \, w \, p_{22}}{\Delta s^3} = \frac{16 \, w^2 \, p_{33}}{\Delta s}    (3.12)

with

p_11 : variance of platform position along platform ^P x-axis
p_22 : variance of platform position along platform ^P y-axis
p_33 : variance of platform orientation
w : half the platform lateral wheel base according to Figure 2.1
∆s : traveled distance of the platform center point P

Some characteristics of the derived formulas should be noted:

• the variance of the position along the platform longitudinal ^P x-axis and of the orientation is proportional to the traveled distance
• the variance of the position along the platform lateral ^P y-axis is proportional to the third power of the traveled distance
• none of the variances includes any time dependency

The second item reflects the fact that the lateral position uncertainty will be influenced by the orientation uncertainty. The third item means that the experimentally measured variances should be independent of the velocity with which the trajectory is driven. Note also that the platform orientation variance will decrease as w increases, which illustrates the stabilizing effect of a large wheel base.

3.1.3 Experimental Results and Analysis

Straight line motion. The mobile platform was driven along a straight line trajectory. There were three available tracks of 4000 mm each and one track of 8000 mm. To begin with, the commanded platform velocity was set to 0.2 m/s. For the shorter track,
a series of ten runs was distributed on the three available tracks. A series of ten runs was also performed on the long track. After each run, the exact position and orientation of the mobile platform was measured. Recorded data from the wheel modules were then used to estimate the performed trajectory according to the algorithm presented in Section 2.1.2. The pose error was computed according to the following equation:

    e_x = x^m_{end} - x^o_{end} = [\,e_x \;\; e_y \;\; e_\psi\,]^T    (3.13)

where x^m_{end} is the actual (measured) end pose and x^o_{end} is the estimated end pose using odometry data. Note that the pose error is measured in the absolute coordinate frame. The units are [m] for the position error and [rad] for the orientation error. For each series of ten runs the sample average, variance, standard deviation and second order moment of the pose error were computed according to the formulas in Appendix A. The shorter tracks were also covered with velocities of 0.6 m/s and 1.0 m/s (ten runs each). The results are presented in Table 3.1.

Despite careful calibration of the wheel radii and steering angles to minimize the systematic pose errors, a substantial systematic error remains, as indicated by non-zero mean values of the position and orientation errors. This is why the variance basis V̄^o is computed using the second order moment m2 instead of the variance s². Table 3.1 also lists the entries of the main diagonal of the pose uncertainty matrix obtained by numerical simulation using odometry data and the variance basis V̄^o obtained from the lateral position error. These values are averaged over 3 simulation runs.

The variance basis V̄^o obtained from the lateral position and orientation errors yields similar results for every set of measurements (1.065e-04 < V̄^o < 1.436e-04).¹

¹ The lower variance basis V̄^o for the 8000 mm track is partly due to the fact that all runs were performed on the same track, thus lowering the spread in the pose error.
Track [mm] | v [m/s] | Comp. | ē          | s²(e)     | s(e)      | m2(e)     | V̄^o       | Simulated variance
4000       | 0.2     | x     | -4.980e-03 | 1.236e-05 | 3.515e-03 | 3.593e-05 | 3.438e-05 | 1.334e-04
           |         | y     |  6.874e-02 | 6.996e-04 | 2.645e-02 | 5.355e-03 | 1.271e-04 | 5.366e-03
           |         | ψ     |  2.705e-02 | 9.078e-05 | 9.528e-03 | 8.132e-04 | 1.124e-04 | 9.192e-04
4000       | 0.6     | x     | -2.284e-03 | 1.843e-05 | 4.293e-03 | 2.180e-05 | 2.078e-05 | 1.514e-04
           |         | y     |  7.542e-02 | 4.110e-04 | 2.027e-02 | 6.057e-03 | 1.419e-04 | 6.043e-03
           |         | ψ     |  2.411e-02 | 2.555e-04 | 1.599e-02 | 8.110e-04 | 1.116e-04 | 1.030e-03
4000       | 1.0     | x     | -3.343e-03 | 2.242e-05 | 4.735e-03 | 3.136e-05 | 2.988e-05 | 1.512e-04
           |         | y     |  7.155e-02 | 1.120e-03 | 3.346e-02 | 6.127e-03 | 1.436e-04 | 6.026e-03
           |         | ψ     |  2.824e-02 | 1.849e-04 | 1.360e-02 | 9.641e-04 | 1.327e-04 | 1.033e-03
8000       | 0.2     | x     |  1.365e-04 | 2.077e-05 | 4.558e-03 | 1.871e-05 | 9.350e-06 | 2.381e-04
           |         | y     |  1.753e-01 | 8.977e-04 | 2.996e-02 | 3.154e-02 | 1.065e-04 | 3.270e-02
           |         | ψ     |  4.265e-02 | 4.360e-05 | 6.603e-03 | 1.858e-03 | 1.341e-04 | 1.494e-03

Table 3.1: Experimental results for the determination of V̄^o (straight line motion); the three rows of each run set correspond to the x, y and ψ components of the pose error
Comp. | ē_ccw      | s²_ccw(e)  | ē_cw       | s²_cw(e)   | ē (all)    | m2(e) (all) | Simulated variance (V̄^o = 1.4·10⁻⁴ m²/m)
x     |  5.675e-02 | 3.122e-04  | -2.610e-02 | 1.782e-04  |  1.532e-02 | 2.135e-03   | 2.276e-03
y     | -3.190e-03 | 2.483e-07  |  2.754e-03 | 7.292e-06  | -2.180e-04 | 1.171e-05   | 6.894e-04
ψ     |  6.214e-02 | 3.417e-04  |  2.641e-02 | 9.613e-05  |  4.427e-02 | 2.444e-03   | 1.558e-03

Table 3.2: Experimental results for the determination of V̄^o (circular motion); ccw: counterclockwise runs, cw: clockwise runs
However, the variance basis computed with the longitudinal position error is up to one order of magnitude smaller (9.350e-06 < V̄^o < 3.438e-05). This result might indicate that the assumption of a diagonal error covariance matrix for the traveled distance of the wheel modules, as expressed in equation (3.5), does not hold true. Nevertheless, the model seems to describe the lateral position and orientation uncertainty relatively accurately, which is why it will not be rejected at this stage. Additionally, the results confirm that there is no significant dependence of V̄^o on the velocity with which the path is driven.

Comparing the measured second order moment m2 of the position and orientation errors to the simulated variances, it can be seen that the measured second order moment of the lateral position error is very well approximated by numerical simulation, which validates our analytical results and confirms that the discretization and roundoff errors in the numerical model are acceptable.² The simulated orientation error variance is also in acceptable proximity to the measured orientation second order moment m2. As expected, the simulated longitudinal position error variance is much larger than the measured second order moment. A typical simulated standard deviation in the longitudinal position error is approx. 13 mm as opposed to approx. 5 mm for the measurements. As a consequence, the chosen strategy will systematically underestimate the longitudinal precision of the odometry data, while correctly weighting the lateral position and orientation uncertainty. The implemented odometry variance basis is:

    \bar{V}^o = 1.4 \cdot 10^{-4} \; \frac{\mathrm{m}^2}{\mathrm{m}}

Note that this value is approximately the mean of the values for V̄^o obtained from the lateral position error. The reason for this is that the platform position measurements are more reliable than the orientation measurements.

² The variance basis V̄^o used for simulation is exactly the one obtained from the respective lateral position error second order moment.

Circular motion. To verify that the proposed odometry model holds true for other trajectories, the platform was driven on a circle of radius R = 1 m with a velocity
of 0.3 m/s (revolution time of 21 s). To avoid inadmissible wheel configurations, all wheel modules were preset to the required steering angle for the projected trajectory. Of the ten runs performed, five were in a counterclockwise and five in a clockwise sense. The results are shown in Table 3.2. It can be seen that the sign of the average position error changes with the sense of rotation and the orientation error average also differs significantly. This is why one should again consider the second order moment m2 of the pose error instead of the sample variance s². A comparison of this quantity with the simulated pose error variance yields:

• the longitudinal position uncertainty is very well approximated
• the lateral position uncertainty is overestimated by a factor of almost 60
• the orientation uncertainty is underestimated by a factor of almost 1.5

These results confirm that the hypothesis of uncorrelated traveled distances of the individual wheel modules is not justified. In conclusion, one can say that this model in combination with the implemented value of V̄^o yields a conservative estimate of the pose uncertainty, which is either determined correctly or overestimated in most analyzed cases.

It should be noted that the same trajectory driven with inadmissible wheel configurations (the steering angles are commanded by the controller) yields experimental results that match the proposed model even less. This was to be expected as the redundancy resolution algorithm (2.6) is a gross approximation of the platform motion for inadmissible wheel configurations, as noted in Section 2.1.3.

3.2 Gyroscope Error Model

The optical fibre gyroscope (OFG) mounted on the mobile platform measures the platform yaw rate ψ̇ through optical interference. According to [1], the OFG is best
described by the following model:

    \dot{\psi}^g = K^g \cdot \dot{\psi} \cdot (\cos\alpha + \delta_1) + \delta_2 + \delta_{noise}    (3.14)

with

ψ̇^g : measured yaw rate
K^g : scaling factor
ψ̇ : effective platform yaw rate
α : angle of platform yaw axis to sensor coil
δ_1 : factor related to the internal stochastic variation of the scaling factor
δ_2 : gyroscope drift error (offset)
δ_noise : zero mean white noise

We will assume α = 0. Note that the distance separating the gyroscope from the platform yaw axis has no influence on the measured yaw rate since the device does not register translational motion.

The orientation increment measured by the gyroscope during one sample time interval ∆t^g is given by the following discrete time model:

    \Delta\psi^g = K^g \cdot \Delta\psi + \Delta\psi^g_{drift} + \Delta\psi^g_{noise}    (3.15)

with

∆ψ^g : measured orientation increment
K^g : scaling factor, cf. (3.14)
∆ψ : actual platform orientation increment
∆ψ^g_drift : gyroscope drift error (offset)
∆ψ^g_noise : zero mean white noise, incl. the previously mentioned δ_1 and δ_noise

The noise components at two different sample times ∆ψ^g_noise(t_1) and ∆ψ^g_noise(t_2) are assumed to be statistically independent. The noise variance is considered to be made
up of two different components:

    V^g = \dot{V}^g \cdot \Delta t + \bar{V}^g \cdot |\Delta\psi|    (3.16)

1. The first term on the right hand side of (3.16) is independent from the yaw rate and increases linearly with time (time-based variance).
2. The second term is independent from time but increases linearly with the orientation increment during one sample interval (yaw rate-based variance). This part of the gyroscope uncertainty is due to the internal stochastic variation of the scaling factor K^g.

3.2.1 Systematic and Non Systematic Errors

As with any measurement process, the gyroscope measurements are subject to systematic and stochastic errors. Considering the model (3.15) combined with (3.16), we can classify the model parameters into one of these categories:

1. Systematic errors: K^g (if K^g ≠ 1), ∆ψ^g_drift.
2. Stochastic errors: ∆ψ^g_noise, consisting of a time-based noise and a yaw rate-based noise term.

As with the odometry, we will attempt to reduce the systematic errors to a minimum by calibrating the gyroscope. Subsequently, a series of measurements will enable us to determine the parameters of the gyroscope model.

3.2.2 Experimental Determination of ψ̇^g_drift and V̇^g

Procedure. The gyroscope data are recorded during nine hours while the gyroscope is standing still. Before the gyroscope reaches operating temperature, its drift and the noise corrupting the measurements are subject to a slight variation. This is why the data recorded during the first hour were discarded. The rest of the data was
divided into eight sequences of one hour each. For each sequence S_i (i = 1, ..., 8), linear regression was performed according to the following model:

    \Omega_i = \alpha_i + \beta_i \cdot t_i + \varepsilon_i    (3.17)

where Ω_i is a random variable representing the measured yaw rate at sampling time t_i and ε_i is a residue term for which ε_i is statistically independent of ε_j (i ≠ j). The residue is again assumed to be zero mean white noise:

    \varepsilon_i \cong N(0, \sigma_i^2)

For each sequence, the following computations are performed:

1. The least squares estimators α̂_i, β̂_i, σ̂_i² of the linear regression model parameters in (3.17) are computed, σ_i² being the variance of the residue term ε_i.
2. If the gyroscope drift ψ̇^g_drift can be assumed constant for this particular sequence (i.e. β̂_i ≪ 1), it is set equal to α̂_i. The gyroscope time-based variance V̇^g is set equal to the residue variance estimator σ̂_i² multiplied by the gyroscope sampling interval ∆t^g (8 ms).

The average value of the time-based noise variance V̇^g is computed over the eight sequences and this value is implemented in the existing simulation model. The drift is measured before each run performed with the mobile platform while it is standing still and averaged over a time period of 20 seconds.

Results. The values of the parameters of interest are given below:

    ψ̇^g_drift,min = -250.7 °/h
    ψ̇^g_drift,max = -4.6 °/h
    ψ̇^g_drift,avg = -160.9 °/h
    σ̂²_avg = 1.184 · 10⁻⁴ rad²/s²
    V̇^g = 9.475 · 10⁻⁷ rad²/s²
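The per-sequence regression described above can be reproduced in a few lines. The following Python sketch is an assumed illustration of the procedure (variable and function names are not from the thesis):

```python
import numpy as np

def drift_and_time_variance(t, omega, dt_g=0.008):
    """Linear regression of eq. (3.17) on one stationary gyroscope sequence.
    t:    sample times (s); omega: measured yaw rates (rad/s)
    dt_g: gyroscope sample interval, 8 ms
    Returns (drift estimate alpha_hat, time-based variance basis V_dot_g)."""
    beta_hat, alpha_hat = np.polyfit(t, omega, 1)    # slope and intercept
    residuals = omega - (alpha_hat + beta_hat * t)
    sigma2_hat = residuals.var(ddof=2)               # residue variance estimator
    V_dot_g = sigma2_hat * dt_g                      # scaled by the sample interval
    return alpha_hat, V_dot_g                        # drift valid only if |beta_hat| << 1
```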
[Figure 3.1: Distribution of drift compensated gyro measurements (ψ̇ = 0) — histogram of measured yaw rates in rad/s with the corresponding Gaussian probability density]

Furthermore, it should be recalled that the distribution of the residue ε was assumed zero mean Gaussian. As can be seen from the distribution of the drift compensated gyroscope measurements in Figure 3.1, this assumption is not completely verified. The corresponding Gaussian probability distribution with the corresponding variance is plotted in red. However, we will proceed with the assumption of white Gaussian noise as it is detailed enough for the purpose of sensor fusion.

3.2.3 Experimental Determination of K^g and V̄^g

Determination of K^g. In order to determine the value of the scaling factor K^g, a series of circular trajectories were driven (counterclockwise and clockwise sense of rotation). The gyroscope data were recorded and the platform orientation was measured after each run. For each run, the following factor was computed:

    K^g_i = \frac{\Delta\psi^g_i}{\Delta\psi_i}    (3.18)

The average value on a series of ten runs yields an estimate of the real scaling factor. Another series of runs were driven at higher platform velocities to determine if the scaling factor is dependent on the yaw rate (K^g = K^g(ψ̇)).
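Equation (3.18) and the averaging over the runs amount to the following small Python sketch; it is illustrative only, and the run data are placeholders:

```python
import numpy as np

def average_scaling_factor(dpsi_gyro, dpsi_measured):
    """Average gyroscope scaling factor over a series of runs, eq. (3.18).
    dpsi_gyro:     integrated gyroscope orientation increment of each run (rad)
    dpsi_measured: actual (measured) orientation change of each run (rad)"""
    K_i = np.asarray(dpsi_gyro, float) / np.asarray(dpsi_measured, float)
    return K_i.mean()
```

The correction factor applied to the raw measurements is then simply the reciprocal of this estimate, e.g. 1/0.9835 ≈ 1.017 for the value adopted below.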
                 | |ψ̇| = 0.16 rad/s | |ψ̇| = 0.30 rad/s
counterclockwise | 0.9734           | 0.9898
clockwise        | 0.9832           | 0.9874
average          | 0.9783           | 0.9886

Table 3.3: Average K^g for different yaw rates ψ̇

The experimental results are presented in Table 3.3. As can be seen, there is a slight dependency of K^g on the yaw rate ψ̇. An average value of K^g = 0.9835 will be assumed and a correction factor of CF = 1.017 will be implemented to compensate for this systematic error.

Determination of V̄^g. Considering the gyroscope measurement noise model (3.16), the yaw rate-based variance term is given by the following equality:

    \bar{V}^g = \frac{V^g - \dot{V}^g \cdot \Delta t^g}{|\Delta\psi|}    (3.19)

Again, a series of circular trajectory runs are performed and the orientation error of each run is computed according to the following equation:

    e_\psi = \psi^m_{end} - \psi^g_{end}    (3.20)

with

ψ^m_end : actual (measured) end orientation
ψ^g_end : end orientation according to gyroscope measurements

The sample average and variance of the orientation error for a series of runs with different yaw rates yield an approximate value for the yaw rate-based variance V̄^g. The experimental results are presented in Table 3.4. The average of the orientation error lies within a tolerance band of 3s_eψ, which justifies the use of the orientation
error variance to compute V̄^g. The same is true if we separate the measurements into counterclockwise and clockwise series. Note that there is a significant difference between the variance basis computed for different yaw rates. The implemented value is chosen as:

    \bar{V}^g = 5 \cdot 10^{-4} \; \frac{\mathrm{rad}^2}{\mathrm{rad}}

                | ē_ψ [rad] | s²_eψ [rad²] | s_eψ [rad] | V̄^g [rad²/rad]
ψ̇ = 0.16 rad/s | 0.0314    | 0.0057       | 0.0755     | 8.989 · 10⁻⁴
ψ̇ = 0.30 rad/s | 0.0077    | 0.0015       | 0.0389     | 2.374 · 10⁻⁴

Table 3.4: Value of V̄^g for different yaw rates ψ̇

3.3 Magnetic Tracker Error Model

The magnetic tracker used for the localization of the mobile platform tracks the magnetic field of an emitter and determines its position and orientation in six degrees of freedom. The emitter is attached to the mobile platform and its two horizontal position coordinates (x^t and y^t) are tracked for the purpose of this thesis. At this point it should be noted that the coordinate frame of the tracker position measurements is coincident with the absolute coordinate frame for perfect tracker measurements.
3.3.1 Systematic and Non Systematic Errors

Similarly to the other sensors used, the magnetic tracker provides position measurements which are affected by systematic and non systematic errors. The systematic errors can be illustrated by a two dimensional graph mapping the lines of constant x^t and y^t coordinates onto the absolute coordinate system, as shown in Figure 3.2. This graph was produced by recording the position measurements of the magnetic tracker for a grid of equally spaced points. It is clearly visible that the position measurements provided by the tracker are affected by systematic errors, thus generating a set of distorted isolines. This distortion is due to the presence of metal in the lab floor, walls and ceiling which influences the magnetic field received by the tracking device. The systematic error depends on the actual position of the emitter in the lab environment. An analytical description of this systematic error is however difficult to establish.

[Figure 3.2: Isolines of the magnetic tracker measurement coordinates]

In addition to the systematic errors in the tracker position measurements, these are corrupted by non systematic errors. We again assume this error to be white noise. The aforementioned properties of the tracker position measurements can be summarized by the following equalities:
    x^t = [\,x^t \;\; y^t\,]^T = T(x)    (3.21)
    X^t \cong N(x^t, C_{X^t})    (3.22)

where x is the actual absolute position and with

    C_{X^t} = \begin{bmatrix} \sigma^2_{X^t} & \rho\,\sigma_{X^t}\sigma_{Y^t} \\ \rho\,\sigma_{X^t}\sigma_{Y^t} & \sigma^2_{Y^t} \end{bmatrix}    (3.23)

Note that in equation (3.22) the tracker position measurements are written in capital letters where they represent random variables. The parameter ρ in the tracker covariance matrix (3.23) is a measure of the correlation between the random variables X^t and Y^t.

3.3.2 Calibration

In order to reduce the systematic errors in the tracker position measurements, it is necessary to implement the following nonlinear inverse transformation:

    x = [\,x \;\; y\,]^T = T^{-1}(x^t)    (3.24)

The resulting mean and covariance matrix of a position measurement calibrated using (3.24) is given in Theorem 1 in Appendix A, where the nonlinear transformation g is to be replaced by T^{-1}.

As already mentioned, an analytical expression for T^{-1} is difficult to find. The procedure used to approximate this inverse transformation can be summarized as follows:

1. An equally spaced grid of points covering an area 5 m long by 5 m wide is defined inside the magnetic tracker operating range. The distance separating the grid points is 0.5 m on the edge of the area and 1 m inside the area. The reason for this is that the quality of the position measurements gets worse with increasing distance from the receiver situated in the center of the area.
2. The position measurements of the tracker are recorded during 30 seconds for each of the 49 grid points, the measurement frequency being set to 33 Hz. The average and the covariance matrix of the position measurements are computed and stored in a database. Note that the covariance matrix (3.23) is approximated using the sample variances s²_{X^t} and s²_{Y^t}. The maximum recorded sample standard deviation was s_{X^t} = 4.02 mm.
3. The inverse coordinate transformation (3.24) is approximated by creating a look-up table which provides the corrected tracker position measurement by linearly interpolating between the 49 mapped grid points.

[Figure 3.3: Comparison of tracker measurements before and after calibration — tracker readings before and after calibration plotted against the best estimate of the actual path]

The resulting corrected position measurements are compared to the uncorrected ones in Figure 3.3 for a circular path of the mobile platform.
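A minimal sketch of such a look-up correction, assuming the 49 grid points have been surveyed and stored as two arrays (the array and function names are illustrative, not the thesis code), could rely on scattered linear interpolation:

```python
import numpy as np
from scipy.interpolate import griddata

def correct_tracker(x_t, grid_measured, grid_true):
    """Approximate the inverse transformation T^-1 of eq. (3.24) by linear
    interpolation between the mapped grid points (step 3 above).
    x_t:           (n, 2) raw tracker measurements [x^t, y^t]
    grid_measured: (49, 2) averaged tracker readings at the grid points
    grid_true:     (49, 2) surveyed grid point positions in the absolute frame"""
    x_t = np.atleast_2d(x_t)
    x_corr = griddata(grid_measured, grid_true[:, 0], x_t, method="linear")
    y_corr = griddata(grid_measured, grid_true[:, 1], x_t, method="linear")
    return np.column_stack([x_corr, y_corr])
```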
A slight systematic error remains in the corrected tracker position measurements and cannot be described deterministically. One reason for this residual error is the linear interpolation procedure used. Another reason is that the placement of the probe head on the different grid points during the mapping operation is affected by an unavoidable positioning error. We assume that the uncertainty of the probe placement on a grid point x^g is zero mean Gaussian with covariance matrix C_{X^g}. The uncertainty of the magnetic tracker position measurements can then be thought of as the sum of two normally distributed random variables whose variances add up. Strictly speaking this is not entirely correct, because one of the uncertainties is caused by a measurement noise process and the other by a systematic misplacement of the probe head. During operation, however, the mobile platform moves across different regions of the mapped grid, so that the succession of systematic errors behaves similarly to a noise process. Note also that the assumption made is equivalent to computing the covariance matrix from the second order moment, as has already been done in preceding sections. To account for the problem mentioned above, the typical position uncertainty of a few millimeters of the noisy tracker position measurements is increased by an additional uncertainty of 3 cm in both coordinates.

Furthermore, the position measurements of the magnetic tracking device are received with a slight delay τ. Without compensation, merging the absolute position measurements of the tracker with the dead reckoning position estimates would produce a biased pose estimate. By comparing best position estimates (odometry fused with the gyroscope) on a circular trajectory, the delay τ was estimated to be around 180 ms. In the perspective of a real time implementation of the localization algorithm, the delay of the tracker position measurements will have to be accounted for in the merging operation of the Extended Kalman Filter.
Chapter 4

Sensor Fusion

Sensor fusion combines the strengths of different kinds of sensors by merging their information, with the contribution of each sensor weighted according to some criterion. In navigation problems this can be used to take advantage of the high sampling rate of dead reckoning sensors while bounding the growing pose uncertainty by means of absolute position or orientation updates. In this thesis, the Extended Kalman Filter is used to obtain improved pose estimates by merging the information provided by multiple sensors.

The Kalman Filter is a set of recursive equations whose purpose is to estimate the state of a system using a process model on the one hand and measurements on the other hand. The basic Kalman Filter cycle alternates between a time update ('predict'), which yields the a priori estimate, and a measurement update ('correct'), which yields the a posteriori estimate.
The state of the system is projected ahead using a process model; these equations are referred to as the predictor or time update equations. Whenever a measurement becomes available, the state is corrected in proportion to the discrepancy between the received measurement and the measurement predicted from the state variables updated by the process model; these equations are termed the corrector or measurement update equations. Section 4.1 presents the equations of the Extended Kalman Filter in more detail.

4.1 The Extended Kalman Filter

This section gives a basic overview of the Extended Kalman Filter algorithm and is taken from [3]. The equations of the standard Kalman Filter were established in the 1960s for linear systems. These formulas were subsequently extended to deal with nonlinear process and measurement relationships. The modified filter is referred to as the Extended Kalman Filter (EKF); it linearizes the process about the current mean and covariance. In the subsequent developments of this chapter, vector quantities are not underlined for the sake of clarity.

The state vector of the process is termed x ∈ ℜ^n and the process is governed by the nonlinear stochastic difference equation

x_{k+1} = f(x_k, u_k, w_k)    (4.1)

with a measurement z ∈ ℜ^m given by

z_k = h(x_k, v_k)    (4.2)

where u_k is the driving function and the random variables w_k and v_k represent the process and measurement noise, respectively. They are assumed to be independent of each other, white, and normally distributed:

p(w) ∼ N(0, Q),    p(v) ∼ N(0, R)
Note that Q and R are covariance matrices. The state and the measurement vectors can be approximated as

x̃_{k+1} = f(x̂_k, u_k, 0)    (4.3)

and

z̃_k = h(x̃_k, 0)    (4.4)

where x̂_k is some a posteriori estimate of the state (from a previous time step k). To estimate a process with nonlinear difference and measurement relationships, we write new governing equations that linearize about (4.3) and (4.4):

x_{k+1} = x̃_{k+1} + A (x_k − x̂_k) + W w_k    (4.5)

z_k = z̃_k + H (x_k − x̃_k) + V v_k    (4.6)

with the Jacobian matrices at time step k

A_{ij} = ∂f_i/∂x_j (x̂_k, u_k, 0),    W_{ij} = ∂f_i/∂w_j (x̂_k, u_k, 0),
H_{ij} = ∂h_i/∂x_j (x̃_k, 0),    V_{ij} = ∂h_i/∂v_j (x̃_k, 0).

Note that the Kalman Filter is based on the assumption of zero mean white Gaussian noise; some modifications of the basic equations are necessary to deal with colored noise. It was already mentioned in Section 3.3.2 that a Gaussian distribution is not preserved under a nonlinear transformation. The EKF linearizes the process about the current mean and covariance, such that the random variables governing the corresponding error process can be assumed zero mean with a Gaussian probability distribution.

The complete set of EKF equations is given in Table 4.1a and Table 4.1b. The time update equations project the state and covariance estimates from time step k to time step k+1; A_k and W_k are the process Jacobians and Q_k is the process noise covariance matrix at time step k.
The time update equations correspond to the compounding operation mentioned in [4]. The measurement update equations correct the state and covariance estimates with the measurement z_k; H_k and V_k are the measurement Jacobians and R_k is the measurement noise covariance matrix at time step k. The measurement update equations correspond to the merging operation mentioned in [4].

x̂⁻_{k+1} = f(x̂_k, u_k, 0)    (4.7)
P⁻_{k+1} = A_k P_k A_k^T + W_k Q_k W_k^T    (4.8)

Table 4.1a: EKF Time Update Equations

K_k = P⁻_k H_k^T (H_k P⁻_k H_k^T + V_k R_k V_k^T)^{-1}    (4.9)
x̂_k = x̂⁻_k + K_k (z_k − h(x̂⁻_k, 0))    (4.10)
P_k = (I − K_k H_k) P⁻_k    (4.11)

Table 4.1b: EKF Measurement Update Equations

Note that the covariance projection equation in Table 4.1a was already obtained when considering the uncertainty propagation of the odometry model in equation (3.8) in Section 3.1.1.
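A direct transcription of equations (4.7)-(4.11) is sketched below. The function and variable names are chosen for illustration only; f, h and the Jacobian routines stand for the process and measurement models of whichever subfilter uses them and are not fixed here.

    import numpy as np

    def ekf_time_update(x_hat, P, u, f, jac_A, jac_W, Q):
        """EKF time update, equations (4.7) and (4.8)."""
        A = jac_A(x_hat, u)                      # df/dx evaluated at (x_hat, u, 0)
        W = jac_W(x_hat, u)                      # df/dw evaluated at (x_hat, u, 0)
        x_pred = f(x_hat, u, np.zeros(Q.shape[0]))
        P_pred = A @ P @ A.T + W @ Q @ W.T
        return x_pred, P_pred

    def ekf_measurement_update(x_pred, P_pred, z, h, jac_H, jac_V, R):
        """EKF measurement update, equations (4.9)-(4.11)."""
        H = jac_H(x_pred)                        # dh/dx evaluated at (x_pred, 0)
        V = jac_V(x_pred)                        # dh/dv evaluated at (x_pred, 0)
        S = H @ P_pred @ H.T + V @ R @ V.T       # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain, eq. (4.9)
        x_hat = x_pred + K @ (z - h(x_pred, np.zeros(R.shape[0])))   # eq. (4.10)
        P = (np.eye(P_pred.shape[0]) - K @ H) @ P_pred               # eq. (4.11)
        return x_hat, P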
4.2 Delayed-State Measurement Problem

One of the first steps in designing a Kalman Filter is to choose the variables to be included in the state vector. In the case considered here, a natural choice for the state variables is the position and orientation of the mobile platform:

x_k = (x_k, y_k, ψ_k)^T    (4.12)

However, the measurements provided by the odometry and the gyroscope cannot be expressed as functions of these state variables. The measurement equation is of the form

z_k = h(x_k, x_{k−1}, v_k)    (4.13)

which does not fit the standard measurement equation (4.2) because of the occurrence of x_{k−1}. This situation is common in navigation and is known as the delayed-state measurement problem; it can be handled by modifying the Kalman Filter equations so as to accommodate the x_{k−1} term, as described in [5]. The approach chosen in the present work is instead to subdivide the fusion process into two separate steps, as explained below and illustrated in Figure 4.1.

1. The yaw rate measured by the gyroscope is integrated over one time step to yield an orientation increment, and the result is fused with the pose increment measurements of the odometry. The state vector of this process is chosen to contain position and orientation increments, so that the delayed-state measurement problem is avoided.

2. The position measurements of the tracker are fused with the pose estimate obtained by integrating the previously computed pose increments. The state vector of this process contains the absolute platform pose according to (4.12).

Note that the integration of stereo image processing data into the Kalman Filter structure is slightly more complicated and is explained in Chapter 5. In the following, the implementation of both subfilters is discussed.

4.3 Merging of Incremental Pose Estimates

The state vector involved in the fusion of gyroscope measurements with odometry pose increments is chosen as

x_k = (^P Δx_k, ^P Δy_k, Δψ_k)^T    (4.14)

Note that the reference frame of the position increments ^P Δx_k and ^P Δy_k is the platform coordinate system.
Figure 4.1: Diagram showing the implemented Kalman Filter structure (EKF1, dead reckoning: time update with the odometry increments Δs_i^o, φ_i^o via eq. 2.6 and measurement update with the gyroscope increment Δψ^g, producing Δx_k; EKF2, absolute position: time update via eqs. 2.8/2.9 and measurement update with the tracker position x^t, producing x_k)
Time Update    The time update is performed according to the equations in Table 4.1a. The process model f(·) used to project the state ahead is given by the system of equations (2.6) on page 9. The system Jacobian matrices A_k and W_k are computed according to the expressions found in Section 4.1; note that A_k is a zero matrix in this case, because x_{k+1} does not depend on x_k. The covariance matrix of the process noise Q_k is given in (3.5) on page 20 and represents the quality of the pose increment measurements obtained with the proposed odometry model. Note the dependence of Q_k on the odometry variance basis \bar{V}^o.

Measurement Update    The measurement update is performed according to the equations in Table 4.1b. The measurement equation is given by

h(x_k, v_k) = Δψ_k + v_k = H x_k + v_k,    H = (0  0  1)

where v_k is the measurement noise. The measurement noise variance R_k is a scalar and is given by equation (3.16) on page 30; the term V_k is also a scalar and equals 1. The measurement update equations are executed only if a gyroscope measurement is available. The gyroscope measurement rate is 125 Hz, which approximately equals the frequency of the odometry measurements.
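For this subfilter the general measurement update of Table 4.1b collapses to a scalar update. The following sketch is illustrative only; the variable names are chosen freely and R_gyro stands for the gyroscope measurement variance from (3.16).

    import numpy as np

    def ekf1_gyro_update(dx_pred, P_pred, delta_psi_gyro, R_gyro):
        """Correct the predicted increment state (dx, dy, dpsi) with a gyroscope
        orientation increment (scalar measurement, H = (0 0 1), V = 1)."""
        H = np.array([[0.0, 0.0, 1.0]])
        S = float(H @ P_pred @ H.T) + R_gyro       # scalar innovation covariance
        K = (P_pred @ H.T) / S                     # 3x1 Kalman gain
        innovation = delta_psi_gyro - dx_pred[2]   # z_k - h(x_k^-, 0)
        dx_hat = dx_pred + K.ravel() * innovation
        P = (np.eye(3) - K @ H) @ P_pred
        return dx_hat, P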
4.4 Including Absolute Position Updates

The pose increments provided by the previous filter are integrated and the resulting absolute pose is fused with the position measurements supplied by the magnetic tracking device. In this case, the state vector is chosen as

x_k = (x_k, y_k, ψ_k)^T    (4.15)

and the reference frame is the absolute coordinate frame.

Time Update    The time update is performed according to the equations in Table 4.1a. The process model f(·) used to project the state ahead is given by the system of equations (2.9) in combination with (2.8) on page 15. Again, the system Jacobian matrices A_k and W_k are computed according to the expressions found in Section 4.1. The process noise covariance matrix is assembled from the following considerations:

1. Since the matrix Q_k represents the uncertainty propagated by the process, one component of Q_k is the state covariance matrix P_{dr,k} computed in the dead reckoning process upstream of this filter.

2. The system equations (2.8) and (2.9) are derived assuming constant velocity and yaw rate of the mobile platform. An additional process uncertainty therefore results when this condition is violated. Possible values for the standard deviation in position and orientation are

σ_{ξ,a} = ½ a_{ξ,max} · Δt²    (4.16)

where a represents a linear or angular acceleration and ξ ∈ {x, y, ψ}.

Recapitulating, Q_k can be assembled as

Q_k = P_{dr,k} + diag(σ²_{x,a}, σ²_{y,a}, σ²_{ψ,a})    (4.17)
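Assembling Q_k according to (4.16) and (4.17) is straightforward; in the sketch below the maximum accelerations and the time step are placeholder values, not those of the actual platform.

    import numpy as np

    def process_noise_ekf2(P_dr, dt, a_max=(0.5, 0.5, 1.0)):
        """Assemble Q_k according to eqs. (4.16) and (4.17).

        P_dr  : 3x3 covariance of the fused pose increments (dead reckoning filter)
        dt    : filter time step [s]
        a_max : assumed maximum linear (m/s^2) and angular (rad/s^2) accelerations
                for x, y and psi -- illustrative values only.
        """
        sigma = 0.5 * np.asarray(a_max) * dt**2    # eq. (4.16)
        return P_dr + np.diag(sigma**2)            # eq. (4.17)

    Q_k = process_noise_ekf2(np.diag([1e-6, 1e-6, 1e-7]), dt=0.008)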
Measurement Update    The measurement update is performed according to the equations in Table 4.1b. The measurement equation is given by

h(x_k, v_k) = (x_k, y_k)^T + v_k = H x_k + v_k,    H = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}

where v_k is the measurement noise. The measurement noise covariance matrix is a 2 × 2 matrix whose entries are computed by interpolating the covariance matrices of the corresponding grid points measured during the tracker mapping operation described in Section 3.3.2. The matrix V_k is the 2 × 2 identity matrix.

The measurement update routine is executed only if a tracker position measurement is available. The measurement rate of the tracker is approximately 30 Hz, but lower update frequencies can be chosen as well. Note that the slight delay τ in the tracker measurements has been neglected in this work, but should be addressed in more detail if very precise positioning is required.
Chapter 5

Improving Localization with a Stereo Camera

5.1 Approach

A stereo camera makes it possible to obtain depth images of the environment by comparing the images provided by its individual cameras (usually two or three) at the same point in time. Objects in the field of view can thus be localized with respect to the stereo camera; this is the subject of Section 5.5. By mounting such a device on the mobile platform, the platform can be localized in the environment provided the absolute positions of the objects detected in a stereo image pair are known. In a second approach, a priori knowledge of the absolute object positions is not required: the objects are tracked over a sequence of image pairs in order to periodically correct the pose increments provided by a dead reckoning algorithm. This is done by adapting the Extended Kalman Filter as explained in Section 5.8, after assessing the errors involved in stereo image processing in Section 5.6. In the framework of this thesis, five artificial landmarks are attached to the walls of the lab environment and are used to assess the localization procedures described above.
5.2 Hardware Description

In order to enhance the pose estimation, the mobile platform is equipped with a stereo camera manufactured by Point Grey Research. A picture of the camera can be seen in Figure 5.1. Some important specifications are given in the following table:

    Stereo Camera Specifications
    CCD chips         two Sony ICX084 color CCD chips
    Image resolution  640 × 480
    Frame rate        30 Hz
    Baseline          12 cm
    Focal length      2 mm
    Field of view     approx. 100°

Note that the camera is not operated in color mode for the purpose of this work; this reduces the amount of data to be transferred between the camera and the on-board computer. The captured images are gray scale intensity images. In the experiments performed in the framework of this thesis, the camera is operated at a frame rate of approximately 2 Hz.

5.3 Depth Map Generation

The principle of object localization using a stereo camera is a familiar one: the same object appears with a horizontal shift between the left and right images, and the amount of shift depends on the distance between the object and the camera. This is illustrated in Figure 5.2. A disparity image or depth map can thus be established by computing the pixel shift in a pair of stereo images; sub-pixel precision can be reached by comparing intensity values of corresponding pixels. The determination of the position of an object by analyzing a disparity image is termed the reconstruction process, and the inverse task of determining where an object is mapped onto the image plane is termed the perception process. The present work focuses mostly on the reconstruction process, because it is the basis for localizing the platform with respect to its environment.
Figure 5.1: Stereo camera (baseline 12 cm)

Figure 5.2: Disparity in stereo images ((a) left image, (b) right image)

5.4 Coordinate Systems

Before describing the stereo camera error model, it is necessary to define the reference coordinate systems involved. The task of localizing the mobile platform is a two dimensional problem: the vertical position of objects with respect to the platform does not provide additional information for pose estimation. We will thus consider two dimensional coordinate systems in the reconstruction process, unless otherwise specified. There are three reference coordinate systems to be considered for the localization problem using stereo image processing:
1. The stereo camera coordinate system (C, ^C x, ^C y)
2. The mobile platform coordinate system (P, ^P x, ^P y)
3. The absolute coordinate system (O, x, y)

As mentioned earlier, the absolute coordinate system corresponds to the coordinate system of the tracker position measurements. The disposition of these coordinate systems, together with the positions of the landmarks used for improved pose estimation, is shown in Figure 5.3.

Figure 5.3: Coordinate systems in lab environment (landmarks, walls, platform and the frames O, C and P)

When analyzing the perception process, additional coordinate systems attached to the stereo camera will be considered; they are illustrated in Figure 5.4. The coordinate frame attached to each camera originates from the piercing point of that camera, called the optical center. The stereo camera coordinate frame (C, ^C x, ^C y) originates from the center of the baseline separating both cameras.
Figure 5.4: Stereo camera coordinate systems (left camera frame C_L, right camera frame C_R and the central stereo frame C)

5.5 Object Localization

In order to achieve improved pose estimation, the position of objects in the environment must be determined. Section 5.5.1 considers the process of mapping an object, given by its two dimensional coordinates, onto the camera image planes; this is the perception process mentioned earlier. Section 5.5.2 discusses the inverse process of localizing an object with respect to the mobile platform, i.e. the aforementioned reconstruction process. The problem considered here is two dimensional: the object is identified by its x and y coordinates in the environment, which simplifies both the perception and the reconstruction process. For an analysis of the equivalent three dimensional problem, the reader is referred to [6].

5.5.1 Stereo Camera Model

The problem of interest here is how an object with position coordinates ^C x = (^C x, ^C y)^T in the stereo camera coordinate frame is mapped onto the image planes of the left and right cameras. Notice that the term object refers to point features here, e.g. the center of gravity of the object surface as it is mapped onto the image plane. The situation is illustrated in Figure 5.5, where for convenience the image planes are drawn in front of the optical centers, at the level of the camera lenses. In the following developments we assume perfect camera geometry: the epipolar lines of both cameras are perfectly aligned, i.e. their image planes are parallel, and the mapping of the object onto the image planes is assumed to obey a pinhole camera model (no lens distortion). In the two dimensional case examined here, it is not necessary to consider the whole image plane.
Figure 5.5: Stereo geometry (object at ^C x(^C x, ^C y), focal length f, baseline 2b, optical centers C_L and C_R, image plane coordinates u_l and u_r)

Since we are not interested in the vertical coordinate of the object, it suffices to consider the lines representing the pixel row where the object is mapped. The horizontal image plane coordinates u_l and u_r according to a pinhole camera model are given by

u_l = (f ^C x + f b) / ^C y,    u_r = (f ^C x − f b) / ^C y    (5.1)

where f is the focal length of the camera and the baseline has length 2b. The computed image plane coordinates are given with respect to the image center point, which is the projection of the optical center onto the image plane. To obtain the pixel coordinates u_{l,M} and u_{r,M} in the computer memory frame, an additional transformation is necessary:

u_{l,M} = u_l / d_x + C_x,    u_{r,M} = u_r / d_x + C_x    (5.2)

where d_x is the distance separating two sensor elements of the CCD chip in x direction and C_x is the pixel coordinate of the image center in x direction. The pixel coordinates of the image center are obtained by the calibration procedure of the camera and are stored in a calibration file.
5.5.2 Triangulation Equations of the Reconstruction Process

The object coordinates are obtained by solving (5.1) for ^C x and ^C y:

^C x = b (u_l + u_r) / (u_l − u_r),    ^C y = 2 b f / (u_l − u_r)    (5.3)

In these formulas it is implicitly assumed that the transformation from pixel coordinates to image plane coordinates has already been carried out.
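The perception model (5.1)-(5.2) and the reconstruction (5.3) can be transcribed directly. The sketch below uses the nominal parameters quoted in Section 5.2 (f = 2 mm, baseline 2b = 12 cm) together with an assumed pixel pitch d_x and image center C_x; these are illustrative values, not the calibrated ones.

    import numpy as np

    F  = 0.002      # focal length f [m] (nominal value from Section 5.2)
    B  = 0.06       # half baseline b [m] (baseline 2b = 12 cm)
    DX = 7.4e-6     # pixel pitch d_x [m] -- assumed value
    CX = 320.0      # image center pixel coordinate C_x -- assumed value

    def perceive(cx, cy):
        """Map an object (^C x, ^C y) to pixel coordinates, eqs. (5.1) and (5.2)."""
        ul = (F * cx + F * B) / cy
        ur = (F * cx - F * B) / cy
        return ul / DX + CX, ur / DX + CX

    def reconstruct(ul_m, ur_m):
        """Recover (^C x, ^C y) from pixel coordinates, eq. (5.3)."""
        ul = (ul_m - CX) * DX
        ur = (ur_m - CX) * DX
        cx = B * (ul + ur) / (ul - ur)
        cy = 2.0 * B * F / (ul - ur)
        return cx, cy

    # Round trip: project a point 1.5 m in front of the camera and reconstruct it.
    ul_m, ur_m = perceive(0.3, 1.5)
    print(reconstruct(ul_m, ur_m))    # approximately (0.3, 1.5)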
5.6 Systematic and Non Systematic Errors

As for the other sensors considered in this work, localization using stereo image processing is affected by systematic and non systematic errors. The systematic error component is usually minimized by camera calibration. Although the camera calibration parameters are specified by the manufacturer, an additional calibration procedure was able to reduce the systematic error in the reconstruction process even further; it is described in Section 5.6.1. The non systematic errors encountered in stereo image processing and their effect on the reconstruction process are analyzed in Section 5.6.2.

5.6.1 Camera Calibration

Camera calibration is the procedure used to determine the values of the intrinsic and extrinsic camera parameters, in order to quantify the systematic errors in the mapping of points onto the camera CCD chip with respect to a pinhole camera model. For a description of the calibration parameters and an example of a calibration method, the reader is referred to [6].

The stereo camera used provides rectified images which have already been corrected for lens distortion and camera misalignment. The intrinsic camera calibration parameters C_x, f and the extrinsic parameter b are listed in a calibration file. Despite this factory calibration, the need for improved precision in the reconstruction process became apparent from a series of measurements using a pattern of markers positioned in front of the camera at a well-defined distance: a comparison of the marker positions computed with the reconstruction formula (5.3) and the real marker positions revealed systematic errors of up to 20%. Clearly, such errors are too large to achieve increased precision in the localization of the mobile platform.

In the problem considered here, it is possible to improve the precision of the reconstruction process by an additional calibration, provided the objects used for improved pose estimation always map onto the same region of the CCD chip. In the case of landmark detection studied here, this is achieved by adjusting the height of the camera to fit the height of the artificial landmarks. This ensures that the center point of the landmarks maps onto a narrow horizontal band of the CCD chip; in addition, the vertical pixel coordinate of the landmark center point is then independent of the distance separating the platform from the landmark. The additional calibration procedure is described below. Notice that the operations mentioned are performed for both cameras.

1. The stereo camera is placed at a well-defined distance from a calibration pattern consisting of nine white markers on a black cardboard, so that the position ^C x of each marker with respect to the stereo camera is known. A picture of the setup is shown in Figure 5.6.

2. The horizontal pixel coordinate of the center point of each marker is extracted from a captured camera image by an appropriate algorithm written in Matlab. The extracted center point pixel coordinates are compared to the pixel coordinates computed according to the pinhole camera model (5.1) and (5.2). Figure 5.7a shows an image of the calibration pattern as grabbed by the right camera, with the theoretical marker center points illustrated together with the extracted ones. Subsequently, a quadratic polynomial is fitted to the extracted center point pixel coordinates, as shown in Figure 5.7b.
Figure 5.6: Camera calibration setup (stereo camera facing the marker pattern)

3. A correction function is determined by inverting the quadratic polynomial obtained in the previous step. Each raw pixel coordinate extracted from an image is corrected using this function.

The result of the correction applied to the calibration pattern is illustrated in Figure 5.8, where the reconstructed marker coordinates are plotted alongside the actual ones. Before the additional calibration procedure, the depth of the markers with respect to the camera is systematically overestimated by up to 15%; this systematic error is removed after applying the correction function.
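As an illustration, the fit of step 2 and its inversion in step 3 can be realized as follows. The sample data are fabricated and merely mimic the polynomial reported in Figure 5.7b (u ≈ 3.6·10⁻⁵ u_pinhole² + 0.98 u_pinhole + 3.1); the coefficients are therefore not the calibration values of the thesis.

    import numpy as np

    # Fabricated calibration data: pinhole-model pixel coordinates of nine markers
    # and the coordinates "extracted" from the image (mimicking Fig. 5.7b).
    u_pinhole  = np.array([40., 110., 180., 250., 320., 390., 460., 530., 600.])
    u_measured = 3.6e-5 * u_pinhole**2 + 0.98 * u_pinhole + 3.1

    # Step 2: fit u_measured = a*u_pinhole^2 + b*u_pinhole + c.
    a, b, c = np.polyfit(u_pinhole, u_measured, deg=2)

    # Step 3: correction function -- invert the quadratic for a raw pixel coordinate.
    def correct_pixel(u_raw):
        """Map a measured pixel coordinate back to its pinhole-model equivalent."""
        roots = np.roots([a, b, c - u_raw])
        # keep the real root inside the valid image range [0, 640]
        valid = [r.real for r in roots if abs(r.imag) < 1e-9 and 0.0 <= r.real <= 640.0]
        return valid[0]

    print(correct_pixel(u_measured[4]))   # approximately 320.0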
Figure 5.7: Comparison of ideal and extracted pixel coordinates in the calibration pattern ((a) ideal and extracted marker center points; (b) correction function by curve fitting: quadratic approximation u = 3.6e-5·u_pinhole² + 0.98·u_pinhole + 3.1 relating the pinhole camera model to the measured points)
Figure 5.8: Reconstructed marker locations (^C x versus ^C y before and after the additional calibration; reconstruction algorithm versus actual marker positions)
5.6.2 Non Systematic Errors

The perception process responsible for mapping objects in the environment onto the CCD chip of each camera is affected by random errors. Some of the sources of these errors are:

• temperature dependent CCD noise
• varying illumination conditions
• spatial discretization error
• intensity discretization error
• calibration errors
• coordinate transformation errors from (O, x, y) to (C, ^C x, ^C y)

Though the last two error sources are systematic ones, the compounding of several such error sources tends towards a random distribution. The intensity discretization error refers to the discretization of brightness into intensity values ranging from 0 to 255. The spatial discretization error designates the fact that an image is an array of pixels: the lower the resolution of the image, the higher the spatial discretization error. The effect of the image discretization in pixels on the reconstruction process is illustrated in Figure 5.9. The tick marks on the image planes denote pixel boundaries, and the radiating lines extend these boundaries into space. The position of the point P can lie anywhere in the region enclosed by the extended pixel boundaries. The further the object is from the camera, the more elongated the uncertainty region becomes; note that it is also asymmetric and oriented. The uncertainty model developed below approximates the size, shape and orientation of these regions by uncertainty ellipses.

The uncertainty in the reconstruction of an object location is obtained by propagating the uncertainty in the horizontal pixel coordinate of each camera, a procedure very similar to the approaches adopted for other sensors in Chapter 3. We assume a one dimensional, normally distributed error in the measured horizontal image coordinates and derive the two dimensional Gaussian distribution describing the error in the inferred 2D object coordinates.
Figure 5.9: Uncertainty in stereo image processing (pixel boundaries on the left and right image planes and the resulting uncertainty ellipse around the point P)

This procedure is described in [7] and is recapitulated here. Let the horizontal image plane coordinates be given by U_l and U_r in the left and right image, respectively, and consider them as normally distributed random scalars with means u_l = μ(U_l) and u_r = μ(U_r) and variances V_l^c and V_r^c. The mean 2D position of a point P is computed by inserting u_l and u_r in equations (5.3), and the covariance matrix is determined using the linear approximation

C_P = J_c \begin{pmatrix} V_l^c & 0 \\ 0 & V_r^c \end{pmatrix} J_c^T    (5.4)

where

J_c = \begin{pmatrix} ∂^C x/∂u_l & ∂^C x/∂u_r \\ ∂^C y/∂u_l & ∂^C y/∂u_r \end{pmatrix}

is the Jacobian matrix of the system of triangulation equations (5.3). This uncertainty model considers the left and right image coordinates as uncorrelated. The values of the pixel variances V_l^c and V_r^c must be determined.
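The propagation (5.4) can be carried out with the analytical Jacobian of (5.3). In the sketch below the calculation is done in pixel coordinates measured from the image center, with the focal length expressed in pixels; the half baseline, focal length and pixel variances are placeholder values.

    import numpy as np

    B = 0.06         # half baseline b [m] -- assumed value
    F_PIX = 270.0    # focal length expressed in pixels (f / d_x) -- assumed value

    def reconstruct_with_covariance(ul, ur, v_l=1.0, v_r=1.0):
        """Triangulate a point from centered pixel coordinates and propagate the
        pixel variances through eq. (5.4)."""
        d = ul - ur                                   # disparity
        cx = B * (ul + ur) / d                        # eq. (5.3)
        cy = 2.0 * B * F_PIX / d
        # Jacobian of (5.3) with respect to (u_l, u_r)
        J = np.array([[-2.0 * B * ur / d**2,    2.0 * B * ul / d**2],
                      [-2.0 * B * F_PIX / d**2, 2.0 * B * F_PIX / d**2]])
        C_P = J @ np.diag([v_l, v_r]) @ J.T           # eq. (5.4)
        return np.array([cx, cy]), C_P

    p, C_P = reconstruct_with_covariance(ul=35.0, ur=15.0)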
To get an estimate of the random error originating from the first three random error sources listed above, the calibration setup illustrated in Figure 5.6 is used again. A total of 500 frames of the calibration pattern are captured, and the error variance of the extracted center point coordinates is determined for each of the nine markers. It is implicitly assumed here that the algorithm extracting the center point coordinate of the markers is perfect and that the only error registered is due to the aforementioned random error sources. The maximum recorded variance amounts to s_u² = 2.666 · 10⁻³ pixel². However, as mentioned in [6], this image processing noise can be neglected compared to the remaining systematic errors in the perception process: the calibration procedure minimizes the sum of the errors between the real camera and a pinhole camera model over the whole image plane (or a portion of it), so systematic errors remain which vary with the considered image region. To account for this, the implemented pixel variance is conservatively increased to V_l^c = V_r^c = 1 pixel².

The effect of the implemented pixel noise variance on the position estimation of objects in the environment is illustrated in Figure 5.10. The first picture shows the uncertainty ellipses surrounding two objects situated in front of the mobile platform equipped with the stereo camera; the size of the uncertainty regions was computed using equation (5.4). The procedure for computing ellipsoid dimensions from the corresponding covariance matrix is mentioned in [4]. The second picture is a closeup of the uncertainty regions of objects 1 and 2. The scattered points are the results of a simulation using the reconstruction algorithm (5.3) with the ideal pixel coordinates of the objects corrupted by noise of standard deviation 1 pixel. It can be seen that the elongated shape of the uncertainty region predicted in Figure 5.9 is only partially captured by the Gaussian model. The reason for this is the linear approximation used in (5.4): the real distribution is not Gaussian because of the non-linearity of the reconstruction process (5.3).
Figure 5.10: Uncertainty in object locations through stereo image processing
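The comparison shown in Figure 5.10 can be reproduced with a small Monte Carlo experiment. The sketch below contrasts the linearized covariance of eq. (5.4) with the sample covariance of noisy reconstructions; the object position and the camera parameters are arbitrary, assumed values.

    import numpy as np

    B, F_PIX = 0.06, 270.0              # assumed half baseline [m] and focal length [pixel]

    def reconstruct(ul, ur):
        d = ul - ur
        return np.array([B * (ul + ur) / d, 2.0 * B * F_PIX / d])   # eq. (5.3)

    rng = np.random.default_rng(0)
    ul0, ur0 = 35.0, 15.0               # ideal centered pixel coordinates (arbitrary)

    # Linearized covariance of eq. (5.4) with V_l^c = V_r^c = 1 pixel^2
    d = ul0 - ur0
    J = np.array([[-2*B*ur0/d**2,   2*B*ul0/d**2],
                  [-2*B*F_PIX/d**2, 2*B*F_PIX/d**2]])
    C_lin = J @ J.T

    # Monte Carlo: corrupt the pixel coordinates with 1-pixel standard deviation noise
    pts = np.array([reconstruct(ul0 + rng.normal(), ur0 + rng.normal())
                    for _ in range(5000)])
    C_mc = np.cov(pts.T)

    print(np.round(C_lin, 6))
    print(np.round(C_mc, 6))            # close, but not identical: (5.3) is nonlinear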
5.7 Motion Solving Algorithm

The problem considered here is the following: given the locations of a set of n points in the platform coordinate frame at time t and the locations of the same set of n points in the platform coordinate frame at time t′, determine the motion parameters of the platform between t and t′, i.e. the translation vector t and the rotation matrix R describing the platform motion from instant t to instant t′.

This motion solving algorithm should not be computationally expensive, especially if it is to be included in the sensor fusion algorithm for improved pose estimation; the landmark extraction process is already computationally expensive by itself. An efficient motion solving algorithm is found in [8] and is based on solving an Orthogonal Procrustes problem; more on the Orthogonal Procrustes problem can be found in [9]. The algorithm is summarized below. In the subsequent developments, the point locations are considered to be affected by measurement errors.

1. Let P = (p_1 ... p_n) be a matrix containing the n point locations in the platform coordinate frame at time t and P′ = (p_1′ ... p_n′) be a matrix containing the n corresponding point locations in the platform coordinate frame at time t′. The relationship between P and P′ is given by

P = R P′ + t J^T + E^T

where R is the unknown rotation matrix, t the unknown translation vector, J an n × 1 vector of ones and E a matrix of residuals.
Transposing both sides of this equation yields

P^T = P′^T R^T + J t^T + E

After renaming the matrices (A = P′^T, B = P^T), the basic equation of the motion solving algorithm is obtained:

B = A R^T + J t^T + E    (5.5)

2. Compute the column mean vectors of the matrices A and B,

ᾱ = A^T J / n,    β̄ = B^T J / n,

and subtract them from A and B:

A^* = A − J ᾱ^T,    B^* = B − J β̄^T

The matrices A^* and B^* are input to a standard Procrustes subroutine which computes the unknown rotation matrix R. This subroutine is explained in the next step.

3. Compute the matrix

S = A^{*T} B^*

and diagonalize S^T S and S S^T using a standard singular value decomposition routine:

S^T S = V D_S V^T,    S S^T = W D_S W^T

V and W are orthogonal matrices containing the eigenvectors of S^T S and S S^T. The unknown rotation matrix is then found from V and W:

R̂^T = W V^T    (5.6)

When implementing this procedure, a problem might arise concerning the orientation of the eigenvectors contained in V and W: it might be necessary to reflect some of these eigenvectors to obtain an admissible rotation matrix R. More on this problem is found in [9].
4. An estimate of the matrix B can now be computed using R̂, together with the residual matrix E:

B̂ = A^* R̂^T + J β̄^T,    E = B − B̂

Finally, the translation vector is given by

t̂ = (B − A R̂^T)^T J / n    (5.7)

Equations (5.6) and (5.7) allow the motion of the mobile platform between the instants t and t′ to be inferred. The only iterative part of the algorithm is the singular value decomposition of a 2 × 2 matrix. The estimates R̂ and t̂ are optimal in a least squares sense; however, each of the object locations stored in the matrices A and B is equally weighted. An improved solution can be obtained if the Gaussian probability distribution of the object locations is taken into account. Let C_i and C_i′ denote the covariance matrices corresponding to the position of point p_i at time t and the corresponding point p_i′ at time t′, respectively. If n again denotes the number of points used to infer the platform motion, then the function to be optimized is

∑_{i=1}^{n} ε_i^T W_i ε_i    (5.8)

with ε_i = p_i − R p_i′ − t and W_i = (R C_i R^T + C_i′)^{-1}. This problem can be solved iteratively with an optimization routine (e.g. the Gauss-Newton algorithm), using the previously computed R̂ and t̂ as an initial estimate. The obtained solution weights the point locations contained in A and B according to their uncertainty. A similar matrix-weighted least squares method can be found in [10].
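A compact transcription of steps 1-4 is sketched below; variable names are illustrative. Instead of the two eigendecompositions of step 3, it uses a single SVD of S = A^{*T} B^*, whose left and right singular vectors are exactly the W and V of (5.6); the determinant correction handles the eigenvector reflection issue mentioned above.

    import numpy as np

    def solve_motion(points_t, points_t_prime):
        """Estimate R and t such that p_i ≈ R p_i' + t in a least squares sense.

        points_t       : (n, 2) point locations at time t   (rows of B)
        points_t_prime : (n, 2) point locations at time t'  (rows of A)
        """
        B = np.asarray(points_t, dtype=float)
        A = np.asarray(points_t_prime, dtype=float)

        # Step 2: remove the column means
        A_star = A - A.mean(axis=0)
        B_star = B - B.mean(axis=0)

        # Step 3: Procrustes subroutine via an SVD of S = A*^T B*
        S = A_star.T @ B_star
        U, _, Vt = np.linalg.svd(S)
        D = np.diag([1.0, np.linalg.det(U @ Vt)])   # reflection correction
        R_T = U @ D @ Vt                            # R̂^T, eq. (5.6)

        # Step 4: translation, eq. (5.7)
        t = (B - A @ R_T).mean(axis=0)
        return R_T.T, t

    # Example: rotate a triangle by 30 degrees and shift it, then recover the motion.
    theta = np.deg2rad(30.0)
    R_true = np.array([[np.cos(theta), -np.sin(theta)],
                       [np.sin(theta),  np.cos(theta)]])
    pts_prime = np.array([[1.0, 0.0], [0.0, 2.0], [-1.0, 0.5]])
    pts = pts_prime @ R_true.T + np.array([0.3, -0.2])
    print(solve_motion(pts, pts_prime))   # recovers R_true and (0.3, -0.2)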
The motion solving algorithm described above is used in this thesis to determine the initial and final pose of the mobile platform by detecting landmarks in the first and last image frames captured with the stereo camera. In this case the matrix B contains the absolute positions of the detected landmarks and the matrix A their positions measured by the stereo camera in the platform coordinate frame. The absolute position of all available landmarks is precisely measured beforehand and stored in a database. The motion solving algorithm then infers the translation and rotation necessary to map the absolute coordinate frame onto the platform coordinate frame, thus yielding the position and orientation of the mobile platform with respect to the absolute coordinate frame. Note that in equation (5.8) the weight matrices can be set to W_i = C_i′^{-1} in this case, since the uncertainty C_i of the precisely measured landmark positions in the absolute coordinate frame is negligible. The matrices C_i′ can also be used to determine the uncertainty of the computed pose; in the case of the initial pose, the inferred pose uncertainty serves to initialize the state covariance matrix P_0 in the time update equations of the Extended Kalman Filter (see Section 4.1).

5.8 Including Landmark Detection in the EKF

In order to improve the localization of the mobile platform through image processing, the lab environment is equipped with five artificial landmarks distributed on two adjacent walls. The landmarks consist of white disks of approximately 15 cm in diameter on black cardboard; the high contrast between landmark and background makes the landmark detection particularly efficient. The arrangement of the landmarks can be seen in Figure 5.11 and is also illustrated in Figure 5.3. The absolute position of all landmarks is precisely measured and stored in a database.

In order to include landmark detection in the sensor fusion algorithm, some new concepts must be introduced. These enable the Extended Kalman Filter algorithm to improve the pose estimation of the mobile platform. The procedure is described below and can be visualized in Figure 5.12.

• Assuming that a total of n landmarks are available, define an augmented state vector consisting of the platform pose and the positions of the landmarks in the platform coordinate frame:

x_a = (x, ^P l_1, ..., ^P l_n)^T    (5.9)