Thomas Bak




Lecture Notes - Estimation and
Sensor Information Fusion
November 14, 2000




Aalborg University
Department of Control Engineering
Fredrik Bajers Vej 7C
DK-9220 Aalborg
Denmark


Table of Contents




1.   Introduction
2.   Sensors
     2.1 Sensor Classification
     2.2 Multiple Sensor Networks
3.   Fusion
     3.1 Levels of Sensor Fusion
     3.2 An I/O-Based Characterization
     3.3 Central versus Decentral Fusion
4.   Multiple Model Estimation
     4.1 Benefits of Multiple Models
     4.2 Problem Formulation
     4.3 Methods of Multi-Model Estimation
          4.3.1 Model Switching Methods
          4.3.2 The Model Detection Method
          4.3.3 Multiple Hypothesis Testing
          4.3.4 Model Fusion
          4.3.5 Combined System


1. Introduction




This note discusses multi-sensor fusion. Through sensor fusion we may combine
readings from different sensors, remove inconsistencies and combine the informa-
tion into one coherent structure. This kind of processing is a fundamental feature
of all animal and human navigation, where multiple information sources such as
vision, hearing and balance are combined to determine position and plan a path to a
goal.
    While the concept of data fusion is not new, the emergence of new sensors, ad-
vanced processing techniques, and improved processing hardware make real-time
fusion of data increasingly possible. Despite advances in electronic components,
however, developing data processing applications such as automatic guidance sys-
tems has proved difficult. Systems that are in direct contact and interact with the real
world, require reliable and accurate information about their environment. This in-
formation is acquired using sensors, that is devices that collect data about the world
around them.
    The ability of a single isolated device to provide accurate, reliable data about its
environment is extremely limited: the environment is usually not very well defined,
and sensors are generally not a very reliable interface.
    Sensor fusion seeks to overcome the drawbacks of current sensor technology by
combining information from many independent sources of limited accuracy and re-
liability to give information of better accuracy and reliability. This makes the system
less vulnerable to failures of a single component and generally provides more accu-
rate information. In addition, several readings from the same sensor may be com-
bined, making the system less sensitive to noise and anomalous observations.


2. Sensors




2.1 Sensor Classification

Sensors may be classified into two groups: active or passive. Active sensors send
a signal into the environment and measure the interaction between this signal and
the environment. Examples include sonar and radar. By contrast, passive sensors
simply record signals already present in the environment. Passive sensors include
most thermometers and cameras.
    Sensors may also be classified according to the medium used for measuring the
environment. This type of classification is related to the senses with which humans
are naturally endowed.
Electromagnetic sensors (Human vision).
    1. Charged Coupled Devices (CCD)
    2. Radio Detection and Ranging (RADAR)
    3. etc.
Sound wave sensors (Human hearing).
    1. Sound Navigation and Ranging (SONAR)
    2. etc.
Odor sensors (Human smell).
Touch sensors.
    1. Bumpers
    2. Light Emitting Diodes (LED)
    3. etc.
Proprioceptive sensors (Human balance). The senses that tell the body of its posi-
tion and movement are called proprioceptive senses. Examples:
    1.   Gyroscopes
    2.   Compass
    3.   GPS
    4.   etc.


2.2 Multiple Sensor Networks

Multiple sensor networks may be classified by how the sensors in the network inter-
act. Three classes are defined:
Complementary. Sensors are complementary when they do not depend on each
   other directly, but can be combined to give a more complete image of the en-
   vironment. Complementary data can often be fused by simply extending the
   limits of the sensors.
Competitive. Sensors are competitive when they provide independent measure-
   ments of the same information. They provide increased reliability and accuracy.
   Because competitive sensors are redundant, inconsistencies may arise between
   sensor readings, and care must be taken to combine the data in a way that re-
   moves the uncertainties. When done properly, this kind of data fusion increases
   the robustness of the system.
Cooperative. Sensors are cooperative when they provide independent measure-
   ments, that when combined provide information that would not be available
   from any one sensor. Cooperative sensor networks take data from simple sen-
   sors and construct a new abstract sensor with data that does not resemble the
   readings from any one sensor.
     The major factors dictating the selection of sensors for fusion are the compati-
bility of the sensors for deployment within the same environment, and the comple-
mentary nature of the information derived from the sensors. If the sensors were
merely to duplicate the information, the fusion process would be the equivalent
of building in redundancy to enhance the reliability of the overall system.
     On the other hand, the sensors should be complementary enough in terms of such
variables as data rates, field of view, range, and sensitivity to make fusion of infor-
mation meaningful.
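As an illustration of fusing competitive (redundant) readings, the following sketch combines two independent measurements of the same quantity by inverse-variance weighting. The readings and variances are made-up values for illustration, not numbers from these notes.

```python
def fuse(z1, var1, z2, var2):
    """Minimum-variance fusion of two independent readings of one quantity."""
    w1 = var2 / (var1 + var2)            # the noisier the other sensor, the larger the weight
    w2 = var1 / (var1 + var2)
    z = w1 * z1 + w2 * z2                # fused reading
    var = var1 * var2 / (var1 + var2)    # fused variance is smaller than either input
    return z, var

# Example: a coarse reading (variance 0.04) and a precise one (variance 0.01).
z, var = fuse(10.2, 0.04, 9.8, 0.01)     # -> (9.88, 0.008)
```

Note that the fused variance is below both input variances, which is the sense in which competitive fusion increases accuracy and robustness.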


3. Fusion




3.1 Levels of Sensor Fusion

Observational data may be combined, or fused, at a variety of levels from the raw
data level to a state vector (feature) level, or at the decision level. A common dif-
ferentiation is among high-level fusion, which fuses decisions, mid-level fusion,
which fuses parameters concerning features sensed locally, and low-level fusion,
which fuses the raw data from sensors. The higher the level of fusion, the smaller
the amount of information that must be processed.
    This hierarchy can be further detailed by classification according to input and
output data types, as seen in Figure 3.1.


[Figure 3.1: a block diagram in which data, features, and decisions each pass
through processing stages, with a fusion block operating between the input and
output at each level.]

Fig. 3.1. Levels of sensor fusion. The fundamental levels of data-feature-decision are broken
down into a mode classification based on input-output relationship.


     An additional dimension to the fusion process is temporal fusion, that is, fusion
of data or information acquired over a period of time. This can occur at all levels
listed above and is hence viewed as a part of the categorization above.
     The human brain works at all levels of the categorization of Figure 3.1, as
well as in temporal fusion. An important characteristic of the brain is decision
fusion, owing to its ability to take a global perspective. The machine is most ef-
ficient relative to humans at the data level because of its ability to process large
amounts of raw data in a short period of time.


3.2 An I/O-Based Characterization
By characterizing the three fundamental levels in Figure 3.1, data-feature-decision
by their input output relationship we get a number of input-output modes:
Data In-Data Out (DIDO). This is the most elementary form of fusion. Fusion
    paradigms in this category are generally based on techniques developed in the
    traditional signal processing domain. For example, readings from sensors mea-
    suring the same physical phenomenon, such as two competitive sensors, may
    be combined to identify obstacles at each side of a robot.
Data In-Feature Out (DIFO). Here, data from different sensors are combined to
    extract some form of feature of the environment or a descriptor of the phe-
    nomenon under observation. Depth perception in humans, by combining the
    visual information from two eyes, can be seen as a classical example of this
    level of fusion.
Feature In-Feature Out (FIFO). In this stage of the hierarchy, both input and out-
    put are features. For example, shape features from an imaging sensor may be
    combined with range information from a radar to provide a measure of the vol-
    umetric size of the object being studied.
Feature In-Decision Out (FIDO). Here, the inputs are features and the output of
    the fusion process is a decision, such as a target class recognition. Most pattern
    recognition systems perform this kind of fusion. Feature vectors are classified
    based on a priori information to arrive at a class or decision about the pattern.
Decision In-Decision Out (DIDO). This is the last step in the hierarchy. Fusion at
    this level implies that decisions are derived at a lower level, which in turn im-
    plies that the fusion process at the previously discussed levels has also been per-
    formed. Examples of decision level fusion involve weighted methods (voting
    techniques) and inference.
    In practical problems, it is likely that fusion in many of the modes defined above
will be incorporated into the design to achieve optimal performance. The process of
extracting relevant information from the data, in terms of features and decisions,
may on the one hand throw away information, but on the other hand may also
reduce the noise that degrades the quality of the ensuing decision.
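A minimal sketch of decision-level fusion by voting, one of the weighted methods mentioned above: each local decision maker contributes a class label, and the most common label wins. The class labels are illustrative.

```python
from collections import Counter

def majority_vote(decisions):
    """Fuse local decisions by returning the most common class label."""
    label, count = Counter(decisions).most_common(1)[0]
    return label

# Three local classifiers disagree; the majority label is the fused decision.
fused = majority_vote(["car", "car", "truck"])   # -> "car"
```

A weighted variant would multiply each vote by a confidence score before counting; the principle is the same.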


3.3 Central versus Decentral Fusion
Centralized architectures assume that a single processor performs the entire data
fusion process. With the growing complexity of systems, centralized fusion is be-
coming less attractive due to the high communication load and reliability concerns.
An alternative is to perform a local estimation of states or parameters from avail-
able data followed by a fusion with other similar features to form a global estimate.
When done right, this assures a graceful degradation of the system as nodes fail.
    A fully decentralized architecture is comprised of a number of nodes. Each node
processes sensor data locally, validates it, and uses the observations to update its
state estimate via




[Figure 3.2: (a) sensors A through N feed a data level fusion block, followed by
feature extraction and a decision; (b) each sensor first extracts features, which are
combined by feature level fusion before a decision is made.]

Fig. 3.2. Alternate architectures for multi-sensor identity fusion. (a) Data level fusion, (b)
Feature level fusion.


standard Kalman filtering. The nodes then communicate their results, in terms of
state and possibly parameter estimates, to other similar nodes, which then use them
locally to update their local estimates in a second update. The estimate now avail-
able is a global estimate, and it may be verified to be the same as would be achieved
using a centralized architecture.
    The decentralized architecture is more resistant to node failures.
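The global update step can be sketched as follows: each node fuses the local estimates (x̂ᵢ, Pᵢ) it has received in information (inverse-covariance) form. The sketch assumes the local estimates are independent; correctly removing the prior information shared between nodes is not shown here.

```python
import numpy as np

def global_update(estimates):
    """Fuse a list of (x, P) local estimates into a global (x_g, P_g)."""
    Y = sum(np.linalg.inv(P) for _, P in estimates)       # total information
    y = sum(np.linalg.inv(P) @ x for x, P in estimates)   # information-weighted states
    Pg = np.linalg.inv(Y)
    return Pg @ y, Pg

# Two nodes with scalar states packed as 1x1 arrays (illustrative numbers).
xg, Pg = global_update([(np.array([2.0]), np.array([[4.0]])),
                        (np.array([4.0]), np.array([[1.0]]))])
```

In the scalar case this reduces to the inverse-variance weighting of competitive sensor fusion, which is why the global result matches the centralized one when the independence assumption holds.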


[Figure 3.3: N fusion nodes; in each node, sensor data pass through data validation
and a local estimator producing (x̂_i, P_i), followed by a global update that
exchanges results with the other nodes to produce the global estimate (x̂_g, P_g).]


Fig. 3.3. Decentralized fusion of data. Local estimates of states and parameters are updated
locally by data from other fusion nodes, achieving a global estimate.


4. Multiple Model Estimation




This chapter extends the treatment of estimation theory to the use of multiple
process models. Multiple process models offer a number of important advantages
over single model estimators.
    Multiple models allow a modular approach to be taken. Rather than develop
a single model which must be accurate for all possible types of behaviour by the
true system, a number of different models can be derived. Each model has its own
properties and, with an appropriate choice of a data fusion algorithm, it is possible
to span a larger model space.
    Four different strategies for multiple model estimation are examined in this
chapter:
–    multiple model switching
–    multiple model detection
–    multiple hypothesis testing
–    multiple model fusion
    Although the details of each scheme differ, the first three all use fundamentally
the same approach. The designer specifies a set of models. At any given time only
one of these models is correct and all the other models are incorrect. The different
strategies use different types of tests to identify which model is correct and, once
this has been achieved, the information from all the other models is neglected.
    The latter strategy, model fusion, utilizes the fact that process models are a
source of information and that their predictions can be viewed as the measurements
from a virtual sensor. Therefore, multiple model fusion can be cast in the same
framework as multisensor fusion, and a Kalman update rule can be used to consis-
tently fuse the predictions of multiple process models together. This strategy has
many important benefits over the other approaches to multiple model management,
including the ability to exploit information about the differences in behaviour of
each model. As
a result, the fusion is synergistic: the performance of the fused system can be better
than that of any individual model.
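The virtual-sensor idea can be sketched as follows: a second model's prediction (x̂₂, P₂) is treated as a direct measurement of the state and fused into the first model's prediction with a Kalman-style update. The identity observation model and the independence of the two predictions are simplifying assumptions made here for illustration.

```python
import numpy as np

def fuse_predictions(x1, P1, x2, P2):
    """Kalman-style fusion of two model predictions of the same state."""
    K = P1 @ np.linalg.inv(P1 + P2)    # gain for an identity "virtual sensor"
    x = x1 + K @ (x2 - x1)             # move toward the more confident prediction
    P = (np.eye(len(x1)) - K) @ P1     # fused covariance, below either input
    return x, P

# Scalar example: an uncertain prediction fused with a confident one.
x, P = fuse_predictions(np.array([0.0]), np.array([[4.0]]),
                        np.array([1.0]), np.array([[1.0]]))
```

The fused covariance is smaller than that of either prediction alone, which is the synergy referred to above.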


4.1 Benefits of Multiple Models
Increasing the complexity of a process model does not necessarily lead to a better
estimator. As the process model becomes more complicated, it makes more and


more assumptions about the behaviour of the true system. These are embodied as
an increasing number of constraints on situations in which the model fits the real
world. When the behaviour of the true system is consistent with the assumptions
in the model, its performance is very good. However, when the behaviour is not
consistent, performance can be significantly worse. Simply increasing the order of
the model without regard to this phenomenon could lead to a very complicated, high
order model which works extremely well in very limited circumstances. In most
other situations, its performance could be poor.
    Multimodel estimation techniques are a method which addresses this underlying
problem. Rather than seeking a single model which is sufficiently complicated and
flexible that it is consistent with all possible types of behaviour of the true system, a
number of low order approximate systems are developed. Each approximate system
is developed with a different set of assumptions about the behaviour of the true
system. Since these assumptions differ, the situations in which each model fits are
different.
    By ensuring suitable coverage over the range of operation of the system, good
performance can be achieved under all conditions. Although the accuracy of a single
model in a specific circumstance is lost, this is balanced by the range of situations
over which performance is good. The models are combined so that the strengths of
one cover for the weaknesses of the others. Figure 4.1 provides a representation of
this.



                                        State space of             Applicability of
                                        interest
                                                                   model 3




                           Applicability of
                           model 1

                                                  Applicability of
                                                  model 2



Fig. 4.1. The use of multiple models. Three different models have been derived with different
but overlapping domains. Although a single model is not capable of describing the entire
behaviour of the system, a suitable combination of all of the models can.


4.2 Problem Formulation

Suppose a set of $N$ process models has been derived. Each uses a different set of
assumptions about the true system, and the result is a set of approximate systems.
The $i$'th approximate system is described using the process and observation models

$$\mathbf{x}_i(k+1) = \mathbf{f}_i[\mathbf{x}_i(k), \mathbf{u}(k), k] + \mathbf{v}_i(k) \tag{4.1}$$

$$\mathbf{z}_i(k) = \mathbf{h}_i[\mathbf{x}_i(k), k] + \mathbf{w}_i(k) \tag{4.2}$$
    The state space of the approximate system is related to the true system state,
$\mathbf{x}(k)$, according to the structural equation

$$\mathbf{x}_i(k) = \mathbf{g}_i[\mathbf{x}(k), k]$$

The problem is to find the estimate $\hat{\mathbf{d}}(k|k)$ of the variables of interest$^1$ with
covariance $\mathbf{P}_{dd}(k|k)$ which takes account of all of the different approximate
models, such that the trace of $\mathbf{P}_{dd}(k|k)$ is minimized. The strategy is not limited
to just combining the estimates of the variables in an output stage; the estimate may
also be fed back to each of the approximate systems, directly affecting their estimates.
    This is reflected in the model management strategy, which may be summarized
in terms of two sets of functions: the critical value output function

$$\hat{\mathbf{d}}(k) = \boldsymbol{\phi}[\hat{\mathbf{x}}_1(k), \ldots, \hat{\mathbf{x}}_N(k)] \tag{4.3}$$

which combines the multiple model estimates, and the approximate system update
function

$$\mathbf{x}_i(k) = \boldsymbol{\psi}_i[\hat{\mathbf{x}}_1(k), \ldots, \hat{\mathbf{x}}_N(k)] \tag{4.4}$$

which updates the $i$'th model state. A successful multimodel management strategy
should have the following properties:
Consistency. The estimate should always be consistent$^2$. The estimates become in-
    consistent when the discrepancy between the estimated covariance and the true
    covariance is not positive semidefinite.
Performance. There should be a performance advantage over using a single pro-
    cess model. This justifies the extra complexity of using multimodels instead of
    a single model. These benefits include more accurate estimates and more robust
    solutions.
Theory. The method should have a firm theoretical foundation. Not only does this
    mean that the range of applicability is known, but it is also possible to generalize
    the technique to a wide range of applications.
$^1$ The whole state space may not be of interest, and $\mathbf{d}(k)$ may thus be a subset of $\mathbf{x}(k)$.
$^2$ That is, $\mathrm{E}[\tilde{\mathbf{x}}(i|j)\,\tilde{\mathbf{x}}^T(i|j)] \le \mathbf{P}(i|j)$, where $\tilde{\mathbf{x}}(i|j)$ are the state errors and $\mathbf{Z}^j$ are the measurements up to time $j$.
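The consistency requirement above (the estimated covariance minus the true error covariance must be positive semidefinite) can be checked numerically. The following sketch substitutes a Monte Carlo sample covariance for the expectation, which is an assumption of this illustration.

```python
import numpy as np

def is_consistent(errors, P, tol=1e-9):
    """Check E[x~ x~^T] <= P using the sample covariance of state errors.

    errors: (M, n) array of state errors from M Monte Carlo runs.
    """
    sample_cov = errors.T @ errors / len(errors)
    eigvals = np.linalg.eigvalsh(P - sample_cov)  # PSD iff all eigenvalues >= 0
    return bool(eigvals.min() >= -tol)

# Errors with unit variance: a claimed covariance of 4*I is consistent
# (conservative), while 0.25*I is optimistic and therefore inconsistent.
errors = np.random.default_rng(0).normal(0.0, 1.0, size=(1000, 2))
ok = is_consistent(errors, 4.0 * np.eye(2))       # -> True
bad = is_consistent(errors, 0.25 * np.eye(2))     # -> False
```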


4.3 Methods of Multi-Model Estimation
The principle and practice of multiple model estimation have been used extensively
for many years. This section reviews various multimodel schemes.

4.3.1 Model Switching Methods

The simplest approach for estimation with multiple models is model switching. It
is simple to understand, computationally efficient, and readily facilitates the use of
many different models. The essential idea is shown in Figure 4.2.


[Figure 4.2: sensors feed a bank of filters 1 through n; each filter produces an
estimate, and a selection block picks one of them as the output estimate.]


Fig. 4.2. The model switching method to estimation with multiple models. The output from
the bank of filters is that of the filter which uses an appropriate model for the given situation.


    Suppose that $N$ filters are implemented, one for each process model. All the fil-
ters operate completely independently and in parallel with one another. They make
predictions, perform updates, and generate the estimates of the critical values of the
system. Combining the information from the different models is very simple. It is
assumed that, at any time $k$, only one model is appropriate. The output from the
bank of filters is that of the filter which uses the appropriate model. The problem is
to identify the appropriate model. The output model management function is

$$\hat{\mathbf{d}}(k) = \boldsymbol{\phi}[\hat{\mathbf{d}}_1(k), \ldots, \hat{\mathbf{d}}_N(k)] = \sum_{i=1}^{N} \gamma_i(k)\,\hat{\mathbf{d}}_i(k) \tag{4.5}$$

where

$$\gamma_i(k) =
\begin{cases}
1 & \text{if model } i \text{ is selected,} \\
0 & \text{if model } i \text{ is not selected}
\end{cases} \tag{4.6}$$

Since only one model is chosen at any time, $\sum_i \gamma_i(k)$ sums to unity. The filters com-
bine information only for output purposes, and their respective state spaces are un-
changed.
    The key design issue with this approach is to choose the switching logic which
determines the values of $\gamma_i(k)$; this logic is dependent on the specific application. The


land-based navigation of the Space Shuttle (Ewell 1988), for example, uses two different
process models which correspond to accelerating and cruising flight. The
accelerating model is used when the acceleration exceeds a predefined limit. When
acceleration is below some critical level, the shuttle is assumed to be in cruise mode.
In these cases, complex gravitational and atmospheric models are incorporated into
the equations of motion. The threshold was selected using extensive empirical tests.
    Switching on the basis of measurements (or estimates) may lead to jitter if the
switching criterion is noisy and its mean value is near the switching threshold. A
more refined solution is to employ hysteresis switching which switches only when
there is a significant difference between the performance of the different models.
    Although the model switching method and its variants are simple to understand
and use, there are a significant number of problems. The most important is that the
choice of the switching logic is often arbitrary, and has to be selected empirically –
there is no theoretical justification that the switching logic is correct.
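The hysteresis idea can be sketched with two thresholds: switch to the accelerating model only when the criterion rises above the upper threshold, and back to cruise only when it falls below the lower one. The threshold values and model names are illustrative, not those of the Shuttle system.

```python
def make_hysteresis_switch(low=0.8, high=1.2, model="cruise"):
    """Return a switching function that changes model only on a clear crossing."""
    state = {"model": model}

    def switch(criterion):
        if state["model"] == "cruise" and criterion > high:
            state["model"] = "accelerating"
        elif state["model"] == "accelerating" and criterion < low:
            state["model"] = "cruise"
        return state["model"]

    return switch

# A noisy criterion hovering near 1.0 no longer causes jitter between models.
switch = make_hysteresis_switch()
models = [switch(c) for c in (1.0, 1.3, 1.0, 1.1, 0.5)]
# -> ["cruise", "accelerating", "accelerating", "accelerating", "cruise"]
```

The dead band between the two thresholds is what suppresses jitter; its width trades switching delay against robustness to noise.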

4.3.2 The Model Detection Method

The model detection approach is a more sophisticated version of the model switch-
ing method. Rather than switching on the basis of various ad hoc thresholds, it
attempts to detect the least complex (parsimonious (Ljung 1987)) model which is
capable of describing the most significant features of the true system. The
issue is a bias/variance tradeoff. As a model becomes more complex, it becomes a
better description of the true system, and the bias becomes progressively smaller.
However, a more complex model includes more states. Given that there is only a fi-
nite amount of noisy data, the result is that the information has to be spread between
the states, and the variance of the estimates of all of the states increases.
     For example, if a vehicle is driving in a straight line, a very simple model may
be used. However, when the vehicle turns, the effects of dynamics and slip may have
to be included. By ensuring that the least complex model is used at any given point
in time, the variance component of the bias/variance tradeoff is minimized, so that
the model is not overly complicated for the maneuvers in question.
     The output model management function is of the same form as Equations (4.5)
and (4.6). No information is propagated from one model to the next, and the approx-
imate system update function is hence the identity.
     One method for testing whether a model is parsimonious or not is to use model
reduction techniques. These examine how a high order model can be approximated
by a low order model. A crucial component for any model reduction scheme is the
ability to assess the relative importance that different states have on the behaviour
of the system. If a number of states have very little effect on output, then they can
be deleted without introducing substantial errors.
     To make the contributions from different modes clear, we turn to balanced re-
alizations (Silvermann, Shookoohi, and Dooren 1983). In balanced realizations a
linear transformation is applied to the state space of the system. In this form, the
contribution of different modes is made clear, and decision rules can be easily for-
mulated.


   To calculate the balanced realization for the linear system

$$\mathbf{x}(k+1) = \mathbf{F}(k)\,\mathbf{x}(k) + \mathbf{G}(k)\,\mathbf{u}(k) \tag{4.7}$$

$$\mathbf{z}(k) = \mathbf{H}(k)\,\mathbf{x}(k) \tag{4.8}$$

its observability, $\mathbf{W}_o$, and controllability, $\mathbf{W}_c$, Grammians are evaluated across a
window of length $N$ according to the equations

$$\mathbf{W}_o(k, k+N) = \sum_{i=k}^{k+N} \boldsymbol{\Phi}^T(i,k)\,\mathbf{H}^T(i)\,\mathbf{H}(i)\,\boldsymbol{\Phi}(i,k) \tag{4.9}$$

$$\mathbf{W}_c(k, k+N) = \sum_{i=k}^{k+N} \boldsymbol{\Phi}(k+N,i)\,\mathbf{G}(i)\,\mathbf{G}^T(i)\,\boldsymbol{\Phi}^T(k+N,i) \tag{4.10}$$

where $\boldsymbol{\Phi}(i,m)$ is the state transition matrix from discrete step $m$ to $i$.
     The Grammians are diagonalized by finding the matrices           and                                                                        such
that
                                                                                                                       E ”D
                       W! 8 … † ‚ ƒ W! 8 … R ‚ 
                      % #      3 !      #     3 !                                ‘ $ “ ’ $2 ‘
                                                                                   # !     # !                                 $2
                                                                                                                              # !              (4.11)
If the linear transformation         $2 ‘
                                    # !
                               is applied to the state vector, the calculated Gram-
                         $ ’
                        # !
mians are diagonal and both equal to       which is the matrix of singular values.
This matrix is diagonal:
                                                     $ E
                                                    # !               u                                u           › œš
                                                                                    sG 
                                                                                                                       ›
                                                        u                  $2
                                                                          # !                          u               ›
                                                                                   sG   
                               —–$ ’
                                • % # !                           “                                                                            (4.12)
                             —                           .
                                                         .                .
                                                                          .        ..                      .
                                                                                                           .
                                            ™ u—˜        .                .             .                  .       
                                                     u                                                 $2 t
                                                                                                      # !
                                                                 sG  ™
                                                                                   sG 
                                                                                     
    The singular values reflect the significance that each state has for the behaviour
of the system (Silvermann, Shookoohi, and Dooren 1983). Both the observability
and controllability of the state are included in the measure. If a state is almost un-
observable, it has little effect on the measured output of the system. If it is almost
uncontrollable, the control inputs can exert little influence on its final value. If it is
both unobservable and uncontrollable, its value is not affected by the control inputs
and it does not change the output of the system.
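The windowed Grammians (4.9)-(4.10) and the resulting singular values can be sketched numerically. For a time-invariant system the balanced singular values are the square roots of the eigenvalues of $G_c G_o$. The following NumPy sketch uses invented system matrices and window length purely for illustration:

```python
import numpy as np

def windowed_grammians(Phi, H, Gamma, N):
    """Accumulate the observability and controllability Grammians of a
    time-invariant (Phi, Gamma, H) system over a window of N steps,
    as in (4.9)-(4.10) with a constant transition matrix."""
    n = Phi.shape[0]
    Go = np.zeros((n, n))
    Gc = np.zeros((n, n))
    Phi_i = np.eye(n)  # the i-step transition matrix Phi^i
    for _ in range(N):
        Go += Phi_i.T @ H.T @ H @ Phi_i
        Gc += Phi_i @ Gamma @ Gamma.T @ Phi_i.T
        Phi_i = Phi @ Phi_i
    return Go, Gc

def balanced_singular_values(Go, Gc):
    """The singular values of the balanced realization are the square
    roots of the eigenvalues of Gc @ Go."""
    eigvals = np.linalg.eigvals(Gc @ Go)
    return np.sort(np.sqrt(np.abs(eigvals.real)))[::-1]

# Two decoupled states; the second is nearly unobservable and nearly
# uncontrollable, so its singular value should be tiny.
Phi = np.diag([0.9, 0.5])
Gamma = np.array([[1.0], [1e-3]])
H = np.array([[1.0, 1e-3]])
Go, Gc = windowed_grammians(Phi, H, Gamma, N=50)
sv = balanced_singular_values(Go, Gc)
# sv[1] << sv[0]: state 2 is a candidate for deletion in a reduced model
```

A decision rule for model switching could then compare the smallest singular value against a threshold.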
Example 4.3.1. A vehicle dynamics model may include tire stiffness parameters in
the state vector in order to model a number of physical parameters in the tire that
change over time. When the vehicle drives in a straight line the forces and slip angles
are small, rendering the tire stiffness parameters unobservable. In this situation, the
filter should switch to a lower order model without tire modeling.
    As the method above is only used for the switching rule, linearization is ac-
ceptable for nonlinear models, but the overhead associated with this method is still
significant. Moreover it may be difficult to evaluate the results.
    The two approaches which have been described so far use deterministic tests
to identify which model is appropriate. The alternative is to consider probabilistic
methods which are described next.
4.3.3 Multiple Hypothesis Testing

Multiple Hypothesis Testing (MHT) (Magill 1965) is one of the most widely used
approaches to multimodel estimation. Rather than using bounds to define regions
of applicability, each model is considered to be a candidate for the true model. For
each model, the hypothesis that it is the true model is raised, and the probability that
the hypothesis is correct is evaluated using the observation sequence. Over time,
the most accurate model is assigned the largest probability and so dominates the
behaviour of the estimator.
    Figure 4.3 shows the structure of this method. Each model is implemented in
its own filter and the only interaction occurs in forming the output. The output is
a function of the estimates made by the different models as well as the probability
that each model is correct. The fused estimate is not used to refine the estimates
maintained in each model and so the approximate system update functions are unity.



[Figure: block diagram in which the sensor data drive a bank of filters (Filter 1
through Filter n); a probability calculation and a combine-estimates block form
the output estimate.]

Fig. 4.3. The model hypothesis method. The output is a function of the estimates made by the
different models as well as the probability that each model is correct.

     The following two assumptions are made:
 1. The true model is one of those proposed.
 2. The same model has been in action since time $k = 0$.
From these two assumptions, a set of $n$ hypotheses is formed, one for each model.
The hypothesis for the $i$'th model is
$$H_i : \text{Model } i \text{ is correct} \qquad (4.13)$$
By assumption 1 there is no null hypothesis because the true model is one of the models
in the set. The probability that model $i$ is correct at time $k$, conditioned on the
measurements up to that time, $Z^k$, is
$$P_i(k) \triangleq P(H_i \mid Z^k) \qquad (4.14)$$
The initial probabilities $P_i(0)$ that each model is correct are given a priori and,
according to assumption 1, sum to unity over all models.
    The probability assigned to each model changes through time as more informa-
tion becomes available. Each filter predicts and updates according to the Kalman
filter equations. Consider the evolution of $P_i$ from $k-1$ to $k$. Using Bayes' rule
$$P_i(k) = P(H_i \mid Z^k) = \frac{P(z(k) \mid H_i, Z^{k-1})\, P_i(k-1)}{\sum_j P(z(k) \mid H_j, Z^{k-1})\, P_j(k-1)} \qquad (4.15)$$
The term $P(z(k) \mid H_i, Z^{k-1})$ is the probability that the observation $z(k)$ would be made
given that the model $H_i$ is valid. This probability may be calculated directly from
the innovations $\nu_i(k) = z(k) - \hat z_i(k \mid k-1)$. Assuming that the innovation is Gaus-
sian, zero mean, and has covariance $S_i(k \mid k-1)$, the likelihood is
$$\Lambda_i(k) = P(z(k) \mid H_i, Z^{k-1}) \qquad (4.16)$$
$$= \frac{1}{(2\pi)^{p/2}\, |S_i(k \mid k-1)|^{1/2}} \exp\!\left( -\tfrac{1}{2}\, \nu_i^T(k)\, S_i^{-1}(k \mid k-1)\, \nu_i(k) \right) \qquad (4.17)$$
where $p$ is the dimension of the innovation vector.
   The innovation covariance $S_i(k \mid k-1)$ is maintained by the Kalman filter and given by
$$S_i(k \mid k-1) = H_i(k)\, P_i(k \mid k-1)\, H_i^T(k) + R_i(k) \qquad (4.18)$$
   The MHT now works by recursion: first calculate $\Lambda_i(k)$ for each model. The
new probabilities are then given by
$$P_i(k) = \frac{\Lambda_i(k)\, P_i(k-1)}{\sum_j \Lambda_j(k)\, P_j(k-1)} \qquad (4.19)$$
The minimum mean squared error estimate is the expected value. The output func-
tion, i.e. the output of the multiple model estimator, is thus given by
$$\hat x(k \mid k) = E[x(k) \mid Z^k] \qquad (4.20)$$
$$= \sum_i E[x(k) \mid H_i, Z^k]\, P(H_i \mid Z^k) \qquad (4.21)$$
$$= \sum_i \hat x_i(k \mid k)\, P(H_i \mid Z^k) \qquad (4.22)$$
$$= \sum_i P_i(k)\, \hat x_i(k \mid k) \qquad (4.23)$$
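As a concrete illustration, the recursion (4.16)-(4.19) and the probability-weighted output (4.23) can be sketched in a few lines of NumPy. The two-model numbers below are invented for illustration:

```python
import numpy as np

def mht_step(probs, innovations, S_list):
    """One step of the MHT recursion: Gaussian innovation likelihoods
    (4.16)-(4.17) followed by the normalized Bayes update (4.19)."""
    likelihoods = []
    for nu, S in zip(innovations, S_list):
        p = nu.shape[0]  # dimension of the innovation vector
        norm = np.sqrt((2.0 * np.pi) ** p * np.linalg.det(S))
        likelihoods.append(np.exp(-0.5 * nu @ np.linalg.solve(S, nu)) / norm)
    posterior = np.asarray(likelihoods) * probs
    return posterior / posterior.sum()

def fused_output(probs, estimates):
    """Probability-weighted output estimate (4.23)."""
    return sum(p * x for p, x in zip(probs, estimates))

# Two candidate models with equal priors; the first filter's small
# innovation indicates it matches the measurement far better.
probs = np.array([0.5, 0.5])
innovations = [np.array([0.1]), np.array([3.0])]
S_list = [np.eye(1), np.eye(1)]
probs = mht_step(probs, innovations, S_list)
x_hat = fused_output(probs, [np.array([1.0]), np.array([2.0])])
# the matched model's probability dominates the fused output
```

Iterating `mht_step` over a measurement sequence reproduces the behaviour described above: the most accurate model acquires the largest probability over time.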
     The covariance of the estimate can be found to be (Bar-Shalom and Fortmann 1988)
$$P(k \mid k) = \sum_i P_i(k) \left[ P_i(k \mid k) + \left( \hat x_i(k \mid k) - \hat x(k \mid k) \right) \left( \hat x_i(k \mid k) - \hat x(k \mid k) \right)^T \right] \qquad (4.24)$$

which is a sum of probability-weighted terms from each model. In addition to the
direct covariance from the models, a term is included that accounts for the fact
that $\hat x_i(k \mid k)$ is not equal to $\hat x(k \mid k)$.
    The MHT approach was developed initially for problems related to system iden-
tification – the dynamics of a nominal plant were specified, but there was some
uncertainty in the values of several key parameters. A bank of filters was imple-
mented, one for each candidate parameter value. Over time, one model acquires
a significantly greater probability than the other models, and it corresponds
to the model whose parameter values are most appropriate.
    The basic form of MHT has been widely employed in missile tracking where
different motion models correspond to different types of maneuvers (Bar-Shalom
and Fortmann 1988). Experience, however, shows that there are numerous practical
problems with applying MHT. The performance of the algorithm is dependent upon
a significant difference between the residual characteristics in the correct and the
mismatched filters, which is sometimes difficult to establish.
     The three methods reviewed so far each have a number of shortcomings, many
of which stem from the fact that they use fundamentally the same principle: at
each instant in time, they assume that only one model is correct, and the multimodel
fusion problem is to identify that model. However, this carries the extremely strong
assumption that all the other models, no matter how close they are to the currently
most appropriate model, provide no useful information whatsoever.
    These shortcomings motivate the development of a new strategy for fusing infor-
mation from multiple models. This approach, known as model fusion, is described
next.

4.3.4 Model Fusion

The model fusion approach treats each process model as if it were a virtual sensor.
The $i$'th process model is equivalent to a sensor which measures the predicted value
$\hat x_i(k \mid k-1)$. The measurement is corrupted by noise (the prediction error) and the
same principles and techniques as in multi-sensor fusion may be used.
    Figure 4.4 illustrates the method for two approximate systems. Each model is
implemented in its own filter and, in the prediction step, each filter predicts the
future state of the system. Since each filter uses a different model and a different
state space, the predictions are different.
    Filter 1 (or 2) propagates its prediction (or some functions of it) to filter 2 (or 1)
which treats it as if it were an observation. Each filter then updates using prediction
and sensor information alike.
[Figure: block diagram for two filters; the sensors feed both filters, each filter's
prediction is propagated to the other, and a batch estimator combines the two
estimates into the output estimate.]

Fig. 4.4. Model fusion architecture for two approximate systems. Each filter predicts the
future state of the system using the same inputs, and updates using sensor information and
the prediction propagated from the other filter.


    The process model is used to predict the future state of the system. Using the
process model, the prediction summarizes all the previous observation information.
The estimate at time $k$ is thus not restricted to using the information contained in $z(k)$.
The prediction allows temporal fusion with data from all past measurements.
    The fusion of data is best appreciated by considering the information form (May-
beck 1979) of the Kalman filter prediction. The amount of information maintained in
an estimate $\hat x$ is defined to be the inverse of its covariance matrix. With the Kalman
gain $W$, the covariance update may be rewritten as
$$P(k \mid k) = P(k \mid k-1) - W(k)\, H(k)\, P(k \mid k-1) \qquad (4.25)$$
$$P(k \mid k)\, P^{-1}(k \mid k-1) = I - W(k)\, H(k) \qquad (4.26)$$
Using this result in combination with the Kalman gain in terms of the updated
covariance (Maybeck 1979)
$$W(k) = P(k \mid k)\, H^T(k)\, R^{-1}(k) \qquad (4.27)$$
where $R(k)$ is the measurement covariance, the state measurement update may be
written as
$$\hat x(k \mid k) = P(k \mid k) \left[ P^{-1}(k \mid k-1)\, \hat x(k \mid k-1) + H^T(k)\, R^{-1}(k)\, z(k) \right] \qquad (4.28)$$
A similar expression can be found for the updated information matrix
$$P^{-1}(k \mid k) = P^{-1}(k \mid k-1) + H^T(k)\, R^{-1}(k)\, H(k) \qquad (4.29)$$
    The estimate in Equation (4.28) is therefore a weighted average of the predic-
tion and the observation. Intuitively the weights must be inversely proportional to
their respective covariances. Seen in the light of Equation (4.28) predictions and
observations are treated equally by the Kalman filter. It is therefore possible to con-
sider the prediction from filter 1 (2) as an observation, $\hat x_1(k \mid k-1)$, with covariance
$P_1(k \mid k-1)$, that can be used to update filter 2 (1).
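A minimal sketch of the information-form update (4.28)-(4.29), treating the other filter's prediction as a virtual observation. The sketch assumes the two prediction errors are uncorrelated, which is a simplification; the numbers are invented:

```python
import numpy as np

def information_update(x_pred, P_pred, z, H, R):
    """Information-form measurement update (4.28)-(4.29): the posterior
    information is the sum of the prediction and observation information."""
    Y = np.linalg.inv(P_pred) + H.T @ np.linalg.solve(R, H)
    P = np.linalg.inv(Y)
    x = P @ (np.linalg.solve(P_pred, x_pred) + H.T @ np.linalg.solve(R, z))
    return x, P

# Filter 1's own prediction
x1 = np.array([1.0])
P1 = np.array([[4.0]])
# Filter 2's prediction, treated as a virtual measurement of the state,
# with its prediction covariance playing the role of measurement noise
x2 = np.array([2.0])
P2 = np.array([[1.0]])
H = np.eye(1)
x_fused, P_fused = information_update(x1, P1, x2, H, P2)
# the fused estimate is pulled toward the lower-covariance prediction
```

The weights are inversely proportional to the covariances, exactly as argued for Equation (4.28): here the fused estimate lands much closer to the tighter prediction.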
Batch Estimator. This approach is fundamentally different from the other ap-
proaches which are described in this chapter. Rather than assume that only one
process model is correct, all the models are treated as valid descriptions of the true
system. At some points in time one model might be better than another, but this
does not mean that the poorer model yields no useful information. On the contrary,
since all the models are capable of making consistent predictions, they all provide
information.
    Each process model makes its own estimates of the interesting variables. The
different estimates can be fused together efficiently and rigorously using a batch
estimator (Bierman 1977). Define the stacked vector of model estimates and the
corresponding observation matrix,
$$\bar x(k) = \begin{bmatrix} \hat x_1(k) \\ \vdots \\ \hat x_n(k) \end{bmatrix} \qquad (4.30)$$
$$\bar H = \begin{bmatrix} I \\ \vdots \\ I \end{bmatrix} \qquad (4.31)$$
together with the full covariance of the stacked estimate, including the cross
covariances between models,
$$\bar P(k \mid k) = \begin{bmatrix} P_{11}(k \mid k) & P_{12}(k \mid k) & \cdots & P_{1n}(k \mid k) \\ P_{21}(k \mid k) & P_{22}(k \mid k) & \cdots & P_{2n}(k \mid k) \\ \vdots & \vdots & \ddots & \vdots \\ P_{n1}(k \mid k) & P_{n2}(k \mid k) & \cdots & P_{nn}(k \mid k) \end{bmatrix} \qquad (4.32)$$
the estimates of interest are then given by
$$P_g(k \mid k) = \left( \bar H^T\, \bar P^{-1}(k \mid k)\, \bar H \right)^{-1} \qquad (4.33)$$
$$\hat x(k) = P_g(k \mid k)\, \bar H^T\, \bar P^{-1}(k \mid k)\, \bar x(k) \qquad (4.34)$$
     There are several advantages of this method. First, it is not necessary to assume
that the real physical system switches from mode to mode, nor is it necessary to
develop a rule to decide which mode should be used. Rather, the information is
incorporated using the Kalman filter update equations.
     Suppose that filter 1 is the correct model, characterized by the fact that its (con-
sistent) prediction has a smaller covariance than that of filter 2. Filter 2 then weights
the predictions from filter 1 very highly in its update. Conversely, filter 1 weights the
predictions from filter 2 by a very small fraction. The result is that the information
which is most correct is given the greatest weight. This method is also capable of
exploiting additional information which none of the other filters used.
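Equations (4.33)-(4.34) amount to a weighted least-squares solve over the stacked estimates. A small NumPy sketch with invented numbers (two scalar model estimates with correlated errors):

```python
import numpy as np

def batch_fuse(x_stack, P_full, n_state, n_models):
    """Batch least-squares fusion (4.33)-(4.34): stack the model
    estimates, weight by the inverse of the full (cross-correlated)
    covariance, and solve for the common state of interest."""
    H = np.vstack([np.eye(n_state)] * n_models)
    W = np.linalg.inv(P_full)
    P_g = np.linalg.inv(H.T @ W @ H)
    x_g = P_g @ (H.T @ W @ x_stack)
    return x_g, P_g

# Two one-dimensional model estimates with correlated errors
x_stack = np.array([1.0, 2.0])
P_full = np.array([[1.0, 0.2],
                   [0.2, 4.0]])
x_g, P_g = batch_fuse(x_stack, P_full, n_state=1, n_models=2)
# x_g lies between the estimates, closer to the lower-variance one,
# and P_g is smaller than either individual variance
```

As described above, the lower-covariance model receives the greater weight, and the fused covariance is smaller than that of either model alone.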
The Structure of Prediction Propagation. In general, the filters cannot directly
propagate and fuse the entire prediction vectors with one another. The reason is that
different models use different state spaces. One model might possess states which
the other models do not have.
     Conversely, two models might have states which are related to one another
through a transformation (for example two filters might estimate the absolute po-
sition of two different points rigidly fixed to the same vehicle). Only information
which can be expressed in a common space can be propagated between each filter.
    A model translator function $T_{1c}(\hat x_1(k))$ propagates the prediction from filter
1 into a state space which is common to both filters. Similarly, the prediction from
filter 2 is propagated using its own translator function $T_{2c}(\hat x_2(k))$. The fusion
space for both filters consists of parameters which are the same in both filters.
These parameters obey the condition that
$$T_{1c}(x_1(k)) = T_{2c}(x_2(k)) \qquad (4.35)$$
for all choices of the true state space vector $x(k)$ and the corresponding model
states $x_1(k)$ and $x_2(k)$. Candidate quantities include
– States which are common to both models (such as the position of the same point
  measured in the same coordinate system).
– The predicted observations.
– The predicted variables (states of interest).
However, it is important to stress that many states do not obey this condition. This
is because many states are lumped parameters: their value reflects the net effects
of many different physical processes. Since different approximate systems use dif-
ferent assumptions, different physical processes are lumped together in different
parameters.
    Each filter treats the propagation from the other filter as a type of observation
which is appended to the observation state space. Therefore, the observation vector
which is received by filter 1 is
$$z_1(k) = \begin{bmatrix} z(k) \\ T_{2c}(\hat x_2(k)) \end{bmatrix} \qquad (4.36)$$
The model fusion is a straightforward procedure. For each pair of models, find a
common state space in which the condition of Equation (4.35) holds. The observation
space for each filter is augmented appropriately, and the application of the filter
follows trivially.
    The model fusion system relies on the Kalman filter for its information propa-
gation; to obtain consistent estimates we must therefore take into account that the
prediction errors of the different models are correlated with one another.
    Figure 4.5 shows the information flows which occur when two models are fused.
Information can be classified in two forms: information which flows from the out-
side (and is independent) and that which flows within (the predictions which prop-
agate between the filters). The external flows are indicated by dashed lines, internal
flows by solid ones. Both flows play a key role in the prediction and the update. In
the prediction step each filter uses its own process model to estimate the future state
of the system.
    The process noises experienced by each model are not independent for two rea-
sons. First, there is a component due to the true system process noise. Second, the
modeling error terms for each filter are evaluated about the true system state. Since
the process noises are correlated, the prediction errors are also correlated. Each filter
[Figure 4.5: two parallel filter chains. For each model $i \in \{1, 2\}$, process noise $i$
and process model $i$ produce a prediction; each update combines that filter's own
prediction, the prediction propagated from the other filter, and the shared sensor
data, and feeds the resulting estimate back into the process model.]

Fig. 4.5. Information flow in the fused two-model case. The external (and independent) flows
are indicated by dashed lines, internal flows (the predictions) by solid ones.

updates using its own prediction, the prediction from the other filter, and the sensor
information.
    Since the process noises are correlated, the prediction errors committed by both
models,
\[
\tilde{\mathbf{x}}_1(k+1\,|\,k) = \mathbf{x}_1(k+1) - \hat{\mathbf{x}}_1(k+1\,|\,k)
\tag{4.37}
\]
\[
\tilde{\mathbf{x}}_2(k+1\,|\,k) = \mathbf{x}_2(k+1) - \hat{\mathbf{x}}_2(k+1\,|\,k)
\tag{4.38}
\]
are also correlated. The cross correlation is
                 CT4! »6R E f E sT†! F E f
                 w# 5 3 e 5 3 !    w % # 5 3 e !                                       T†W® E H
                                                                                      # 5 3 !  ã             (4.39)
                                “                “                               Ž“
where  E ã  is the cross correlation matrix between the two process models.
             “
    As can be seen, the cross correlations between the prediction errors evolve in
a similar fashion to the covariance of each state estimate: first, the diffusion step
scales the cross correlation; second, the correlation is reinforced by the addition of
the process noise correlation.
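This propagation can be sketched as follows (a linear sketch with illustrative matrices; `Q12` stands for the process noise cross correlation of Equation (4.39)):

```python
import numpy as np

def propagate_cross_correlation(P12, F1, F2, Q12):
    """Prediction step for the cross correlation between two filters:
    the transition matrices F1, F2 scale P12 (diffusion), and the
    process noise cross correlation Q12 reinforces it (Eq. 4.39)."""
    return F1 @ P12 @ F2.T + Q12

F1 = np.array([[1.0, 0.1], [0.0, 1.0]])   # filter 1 transition (illustrative)
F2 = np.array([[1.0, 0.1], [0.0, 1.0]])   # filter 2 transition (illustrative)
P12 = np.zeros((2, 2))                    # filters initially uncorrelated
Q12 = 0.01 * np.eye(2)                    # correlated process noise

for _ in range(3):
    P12 = propagate_cross_correlation(P12, F1, F2, Q12)
# even starting from zero, repeated prediction steps build up correlation
```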
    More insight is gained by looking at a Taylor series expansion of the measure-
ment vector given in Equation (4.36):
\[
\mathbf{z}_1^*(k) = \begin{bmatrix} \mathbf{h}_1[\mathbf{x}(k)] + \mathbf{v}(k) \\[2pt] \mathbf{g}_2[\hat{\mathbf{x}}_2(k\,|\,k-1)] \end{bmatrix}
\tag{4.40}
\]
\[
\approx \begin{bmatrix} \mathbf{h}_1[\mathbf{x}(k)] + \mathbf{v}(k) \\[2pt] \mathbf{g}_1[\mathbf{x}_1(k)] - \nabla\mathbf{g}_2\,\tilde{\mathbf{x}}_2(k\,|\,k-1) \end{bmatrix}
\tag{4.41}
\]
where $\nabla\mathbf{g}_2$ represents the Jacobian of $\mathbf{g}_2[\cdot]$, i.e. a linearization about the
current state estimate, and the condition of Equation (4.35) has been used to rewrite
the second component.
Taking outer products leads to the following cross covariance equation:
\[
E\big[\tilde{\mathbf{x}}_1(k\,|\,k-1)\,\tilde{\mathbf{z}}_1^{*T}(k)\big] =
\begin{bmatrix} \mathbf{P}_1(k\,|\,k-1)\,\mathbf{H}_1^T(k) & \;\; \mathbf{P}_1(k\,|\,k-1)\,\nabla\mathbf{g}_1^T - \mathbf{P}_{12}(k\,|\,k-1)\,\nabla\mathbf{g}_2^T \end{bmatrix}
\tag{4.42}
\]
which shows how information flows between the two filters. The first component de-
scribes how the observation information is injected. The second component, which
determines the weight placed on the prediction propagated from filter 2, is the dif-
ference between the covariance of filter 1 and the cross correlation between filters 1
and 2, both projected into the same space.
    In effect only the information which is unique to filter 2 is propagated to filter
1. As the two filters become more and more similar to one another, they become
more and more tightly correlated. Less information is propagated from filter 2 and,
in the limit when both filters use exactly the same process models, no information
is passed between them at all. Model fusion does not invent information over and
above what can be obtained from the sensors. Rather, it is a different way to exploit
the observations. In the other limit, as the models become less and less correlated
with one another, the predictions contain more and more independent information
and so is assigned a progressively greater weight. The update enforces the corre-
lation between the filters through the fact that the same observations are used to
update both filters with the same observation noises.
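The limiting behaviour can be illustrated numerically with the standard fused mean and variance for two correlated scalar estimates (the Bar-Shalom/Campo formulas, used here purely as an illustration; the notes' own mechanism is the augmented Kalman update above):

```python
def fuse_correlated(x1, P1, x2, P2, P12):
    """Fuse two scalar estimates whose errors have cross covariance P12
    (Bar-Shalom/Campo). Returns the fused estimate and variance."""
    denom = P1 + P2 - 2.0 * P12
    if abs(denom) < 1e-12:        # identical information: nothing to gain
        return x1, P1
    w = (P2 - P12) / denom        # weight on the first estimate
    x = w * x1 + (1.0 - w) * x2
    P = (P1 * P2 - P12 ** 2) / denom
    return x, P

# filter 1 estimate (variance 1.0) fused with a propagated prediction
# from filter 2 (variance 2.0):
x_ind, P_ind = fuse_correlated(0.0, 1.0, 0.4, 2.0, 0.0)   # independent
x_cor, P_cor = fuse_correlated(0.0, 1.0, 0.4, 2.0, 1.0)   # fully correlated
# P_ind ~ 0.667 < 1.0, while P_cor == 1.0 and x_cor == x1: a fully
# correlated prediction carries no information filter 1 lacks.
```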
    The rather complex cross correlations in the observation noise may be simplified
by treating the network of filters and cross correlation filters as components of a
single, all-encompassing combined system.

4.3.5 Combined System
The combined system is formed by stacking the state spaces of each approximate
system. The combined state space vector is
\[
\mathbf{x}_c(k) = \begin{bmatrix} \mathbf{x}_1(k) \\ \mathbf{x}_2(k) \\ \vdots \\ \mathbf{x}_n(k) \end{bmatrix}
\tag{4.43}
\]

The dimension of $\mathbf{x}_c(k)$ is the sum of the dimensions of the subsystem filters. The
covariance of the combined system is
\[
\mathbf{P}_c(k\,|\,k) = \begin{bmatrix}
\mathbf{P}_{11}(k\,|\,k) & \mathbf{P}_{12}(k\,|\,k) & \cdots & \mathbf{P}_{1n}(k\,|\,k) \\
\mathbf{P}_{21}(k\,|\,k) & \mathbf{P}_{22}(k\,|\,k) & \cdots & \mathbf{P}_{2n}(k\,|\,k) \\
\vdots & \vdots & \ddots & \vdots \\
\mathbf{P}_{n1}(k\,|\,k) & \mathbf{P}_{n2}(k\,|\,k) & \cdots & \mathbf{P}_{nn}(k\,|\,k)
\end{bmatrix}
\tag{4.44}
\]
The diagonal subblocks are the covariances of the subsystems, and the off diagonal
subblocks are the correlations between them.
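The stacking of Equations (4.43) and (4.44) can be sketched with NumPy block matrices (illustrative two-subsystem values):

```python
import numpy as np

# subsystem estimates and covariances (illustrative values)
x1, x2 = np.array([1.0, 0.5]), np.array([0.9])
P11 = np.diag([0.2, 0.1])                 # covariance of subsystem 1
P22 = np.array([[0.3]])                   # covariance of subsystem 2
P12 = np.array([[0.05], [0.02]])          # cross correlation, shape (2, 1)

# combined state (Eq. 4.43) and block covariance (Eq. 4.44)
x_c = np.concatenate([x1, x2])
P_c = np.block([[P11,   P12],
                [P12.T, P22]])

assert P_c.shape == (3, 3)
assert np.allclose(P_c, P_c.T)            # a valid covariance is symmetric
```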
    The combined system evolves over time according to a process model which is
corrupted by process noise. These are given by stacking the components from each
subsystem:
\[
\mathbf{x}_c(k+1) = \begin{bmatrix}
\mathbf{f}_1[\mathbf{x}_1(k), \mathbf{u}(k), k] \\
\vdots \\
\mathbf{f}_n[\mathbf{x}_n(k), \mathbf{u}(k), k]
\end{bmatrix} + \mathbf{w}_c(k)
\tag{4.45}
\]
The associated covariance is
\[
\mathbf{Q}_c(k) = \begin{bmatrix}
\mathbf{Q}_{11}(k) & \mathbf{Q}_{12}(k) & \cdots & \mathbf{Q}_{1n}(k) \\
\mathbf{Q}_{21}(k) & \mathbf{Q}_{22}(k) & \cdots & \mathbf{Q}_{2n}(k) \\
\vdots & \vdots & \ddots & \vdots \\
\mathbf{Q}_{n1}(k) & \mathbf{Q}_{n2}(k) & \cdots & \mathbf{Q}_{nn}(k)
\end{bmatrix}
\tag{4.46}
\]
    The observation vectors from the individual subsystems are also aggregated to
yield the combined system's observation vector. However, simply stacking the ob-
servation vectors from the individual subsystems can introduce redundant terms
which must be eliminated. This problem can be illustrated by considering the in-
novation vectors for the two-system case. Acting independently of one another, the
innovations for the two approximate systems are
\[
\boldsymbol{\nu}_1(k) = \begin{bmatrix}
\mathbf{z}(k) - \mathbf{h}_1[\hat{\mathbf{x}}_1(k\,|\,k-1)] \\[2pt]
\mathbf{g}_2[\hat{\mathbf{x}}_2(k\,|\,k-1)] - \mathbf{g}_1[\hat{\mathbf{x}}_1(k\,|\,k-1)]
\end{bmatrix}
\tag{4.47}
\]
\[
\boldsymbol{\nu}_2(k) = \begin{bmatrix}
\mathbf{z}(k) - \mathbf{h}_2[\hat{\mathbf{x}}_2(k\,|\,k-1)] \\[2pt]
\mathbf{g}_1[\hat{\mathbf{x}}_1(k\,|\,k-1)] - \mathbf{g}_2[\hat{\mathbf{x}}_2(k\,|\,k-1)]
\end{bmatrix}
\tag{4.48}
\]
Both innovation vectors have the same model fusion component,
$\mathbf{g}_1[\hat{\mathbf{x}}_1(k\,|\,k-1)] - \mathbf{g}_2[\hat{\mathbf{x}}_2(k\,|\,k-1)]$, apart from a sign. Eliminating the redundant
terms, the combined innovation vector becomes
\[
\boldsymbol{\nu}_c(k) = \begin{bmatrix}
\mathbf{z}(k) - \mathbf{h}_1[\hat{\mathbf{x}}_1(k\,|\,k-1)] \\[2pt]
\mathbf{z}(k) - \mathbf{h}_2[\hat{\mathbf{x}}_2(k\,|\,k-1)] \\[2pt]
\mathbf{g}_1[\hat{\mathbf{x}}_1(k\,|\,k-1)] - \mathbf{g}_2[\hat{\mathbf{x}}_2(k\,|\,k-1)]
\end{bmatrix}
\tag{4.49}
\]
To be continued ....
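The elimination of the redundant model fusion component in Equations (4.47)-(4.49) can be sketched as follows (hypothetical helper; linear observation maps $\mathbf{h}_i$ and projection maps $\mathbf{g}_i$ are represented by matrices `H_i`, `G_i`):

```python
import numpy as np

def combined_innovation(z, x1_pred, x2_pred, H1, H2, G1, G2):
    """Stack both filters' measurement innovations plus a single copy of
    the model fusion component g1[x1] - g2[x2] (Eq. 4.49); the mirrored
    component in the second filter's innovation is redundant and dropped."""
    return np.concatenate([
        z - H1 @ x1_pred,            # filter 1 measurement innovation
        z - H2 @ x2_pred,            # filter 2 measurement innovation
        G1 @ x1_pred - G2 @ x2_pred, # shared model fusion component
    ])

z = np.array([2.0])
x1_pred = np.array([1.9, 0.1])       # filter 1: [position, velocity]
x2_pred = np.array([1.7])            # filter 2: [position]
H1 = np.array([[1.0, 0.0]]); G1 = np.array([[1.0, 0.0]])
H2 = np.array([[1.0]]);      G2 = np.array([[1.0]])

nu_c = combined_innovation(z, x1_pred, x2_pred, H1, H2, G1, G2)
# nu_c is approximately [0.1, 0.3, 0.2]
```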

Deep Learning for Health Informatics
 
A Seminar Report On NEURAL NETWORK
A Seminar Report On NEURAL NETWORKA Seminar Report On NEURAL NETWORK
A Seminar Report On NEURAL NETWORK
 
Diplomarbeit
DiplomarbeitDiplomarbeit
Diplomarbeit
 
Masters' Thesis - Reza Pourramezan - 2017
Masters' Thesis - Reza Pourramezan - 2017Masters' Thesis - Reza Pourramezan - 2017
Masters' Thesis - Reza Pourramezan - 2017
 
MASci
MASciMASci
MASci
 
PFC
PFCPFC
PFC
 
Neural Networks on Steroids
Neural Networks on SteroidsNeural Networks on Steroids
Neural Networks on Steroids
 
Data Analysis In The Cloud
Data Analysis In The CloudData Analysis In The Cloud
Data Analysis In The Cloud
 
Thesis_Nazarova_Final(1)
Thesis_Nazarova_Final(1)Thesis_Nazarova_Final(1)
Thesis_Nazarova_Final(1)
 
Routing Protocols for Wireless Sensor Networks
Routing Protocols for Wireless Sensor NetworksRouting Protocols for Wireless Sensor Networks
Routing Protocols for Wireless Sensor Networks
 
Predictiveanalysisonactivityrecognitionsystem 190131212500
Predictiveanalysisonactivityrecognitionsystem 190131212500Predictiveanalysisonactivityrecognitionsystem 190131212500
Predictiveanalysisonactivityrecognitionsystem 190131212500
 
Predictive analysis on Activity Recognition System
Predictive analysis on Activity Recognition SystemPredictive analysis on Activity Recognition System
Predictive analysis on Activity Recognition System
 
The application of process mining in a simulated smart environment to derive ...
The application of process mining in a simulated smart environment to derive ...The application of process mining in a simulated smart environment to derive ...
The application of process mining in a simulated smart environment to derive ...
 
Real Time Condition Monitoring of Power Plant through DNP3 Protocol
Real Time Condition Monitoring of Power Plant through DNP3 ProtocolReal Time Condition Monitoring of Power Plant through DNP3 Protocol
Real Time Condition Monitoring of Power Plant through DNP3 Protocol
 
main
mainmain
main
 
MSc_thesis_OlegZero
MSc_thesis_OlegZeroMSc_thesis_OlegZero
MSc_thesis_OlegZero
 
MSc_Thesis
MSc_ThesisMSc_Thesis
MSc_Thesis
 
Caravan insurance data mining prediction models
Caravan insurance data mining prediction modelsCaravan insurance data mining prediction models
Caravan insurance data mining prediction models
 
Caravan insurance data mining prediction models
Caravan insurance data mining prediction modelsCaravan insurance data mining prediction models
Caravan insurance data mining prediction models
 
Enterprise Data Center Networking (with citations)
Enterprise Data Center Networking (with citations)Enterprise Data Center Networking (with citations)
Enterprise Data Center Networking (with citations)
 

Último

What is Model Inheritance in Odoo 17 ERP
What is Model Inheritance in Odoo 17 ERPWhat is Model Inheritance in Odoo 17 ERP
What is Model Inheritance in Odoo 17 ERPCeline George
 
ISYU TUNGKOL SA SEKSWLADIDA (ISSUE ABOUT SEXUALITY
ISYU TUNGKOL SA SEKSWLADIDA (ISSUE ABOUT SEXUALITYISYU TUNGKOL SA SEKSWLADIDA (ISSUE ABOUT SEXUALITY
ISYU TUNGKOL SA SEKSWLADIDA (ISSUE ABOUT SEXUALITYKayeClaireEstoconing
 
Q4 English4 Week3 PPT Melcnmg-based.pptx
Q4 English4 Week3 PPT Melcnmg-based.pptxQ4 English4 Week3 PPT Melcnmg-based.pptx
Q4 English4 Week3 PPT Melcnmg-based.pptxnelietumpap1
 
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️9953056974 Low Rate Call Girls In Saket, Delhi NCR
 
Procuring digital preservation CAN be quick and painless with our new dynamic...
Procuring digital preservation CAN be quick and painless with our new dynamic...Procuring digital preservation CAN be quick and painless with our new dynamic...
Procuring digital preservation CAN be quick and painless with our new dynamic...Jisc
 
Incoming and Outgoing Shipments in 3 STEPS Using Odoo 17
Incoming and Outgoing Shipments in 3 STEPS Using Odoo 17Incoming and Outgoing Shipments in 3 STEPS Using Odoo 17
Incoming and Outgoing Shipments in 3 STEPS Using Odoo 17Celine George
 
GRADE 4 - SUMMATIVE TEST QUARTER 4 ALL SUBJECTS
GRADE 4 - SUMMATIVE TEST QUARTER 4 ALL SUBJECTSGRADE 4 - SUMMATIVE TEST QUARTER 4 ALL SUBJECTS
GRADE 4 - SUMMATIVE TEST QUARTER 4 ALL SUBJECTSJoshuaGantuangco2
 
ENGLISH6-Q4-W3.pptxqurter our high choom
ENGLISH6-Q4-W3.pptxqurter our high choomENGLISH6-Q4-W3.pptxqurter our high choom
ENGLISH6-Q4-W3.pptxqurter our high choomnelietumpap1
 
Keynote by Prof. Wurzer at Nordex about IP-design
Keynote by Prof. Wurzer at Nordex about IP-designKeynote by Prof. Wurzer at Nordex about IP-design
Keynote by Prof. Wurzer at Nordex about IP-designMIPLM
 
ANG SEKTOR NG agrikultura.pptx QUARTER 4
ANG SEKTOR NG agrikultura.pptx QUARTER 4ANG SEKTOR NG agrikultura.pptx QUARTER 4
ANG SEKTOR NG agrikultura.pptx QUARTER 4MiaBumagat1
 
AMERICAN LANGUAGE HUB_Level2_Student'sBook_Answerkey.pdf
AMERICAN LANGUAGE HUB_Level2_Student'sBook_Answerkey.pdfAMERICAN LANGUAGE HUB_Level2_Student'sBook_Answerkey.pdf
AMERICAN LANGUAGE HUB_Level2_Student'sBook_Answerkey.pdfphamnguyenenglishnb
 
Influencing policy (training slides from Fast Track Impact)
Influencing policy (training slides from Fast Track Impact)Influencing policy (training slides from Fast Track Impact)
Influencing policy (training slides from Fast Track Impact)Mark Reed
 
How to do quick user assign in kanban in Odoo 17 ERP
How to do quick user assign in kanban in Odoo 17 ERPHow to do quick user assign in kanban in Odoo 17 ERP
How to do quick user assign in kanban in Odoo 17 ERPCeline George
 
Computed Fields and api Depends in the Odoo 17
Computed Fields and api Depends in the Odoo 17Computed Fields and api Depends in the Odoo 17
Computed Fields and api Depends in the Odoo 17Celine George
 

Último (20)

What is Model Inheritance in Odoo 17 ERP
What is Model Inheritance in Odoo 17 ERPWhat is Model Inheritance in Odoo 17 ERP
What is Model Inheritance in Odoo 17 ERP
 
YOUVE GOT EMAIL_FINALS_EL_DORADO_2024.pptx
YOUVE GOT EMAIL_FINALS_EL_DORADO_2024.pptxYOUVE GOT EMAIL_FINALS_EL_DORADO_2024.pptx
YOUVE GOT EMAIL_FINALS_EL_DORADO_2024.pptx
 
ISYU TUNGKOL SA SEKSWLADIDA (ISSUE ABOUT SEXUALITY
ISYU TUNGKOL SA SEKSWLADIDA (ISSUE ABOUT SEXUALITYISYU TUNGKOL SA SEKSWLADIDA (ISSUE ABOUT SEXUALITY
ISYU TUNGKOL SA SEKSWLADIDA (ISSUE ABOUT SEXUALITY
 
Q4 English4 Week3 PPT Melcnmg-based.pptx
Q4 English4 Week3 PPT Melcnmg-based.pptxQ4 English4 Week3 PPT Melcnmg-based.pptx
Q4 English4 Week3 PPT Melcnmg-based.pptx
 
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
 
Procuring digital preservation CAN be quick and painless with our new dynamic...
Procuring digital preservation CAN be quick and painless with our new dynamic...Procuring digital preservation CAN be quick and painless with our new dynamic...
Procuring digital preservation CAN be quick and painless with our new dynamic...
 
Incoming and Outgoing Shipments in 3 STEPS Using Odoo 17
Incoming and Outgoing Shipments in 3 STEPS Using Odoo 17Incoming and Outgoing Shipments in 3 STEPS Using Odoo 17
Incoming and Outgoing Shipments in 3 STEPS Using Odoo 17
 
GRADE 4 - SUMMATIVE TEST QUARTER 4 ALL SUBJECTS
GRADE 4 - SUMMATIVE TEST QUARTER 4 ALL SUBJECTSGRADE 4 - SUMMATIVE TEST QUARTER 4 ALL SUBJECTS
GRADE 4 - SUMMATIVE TEST QUARTER 4 ALL SUBJECTS
 
OS-operating systems- ch04 (Threads) ...
OS-operating systems- ch04 (Threads) ...OS-operating systems- ch04 (Threads) ...
OS-operating systems- ch04 (Threads) ...
 
ENGLISH6-Q4-W3.pptxqurter our high choom
ENGLISH6-Q4-W3.pptxqurter our high choomENGLISH6-Q4-W3.pptxqurter our high choom
ENGLISH6-Q4-W3.pptxqurter our high choom
 
Keynote by Prof. Wurzer at Nordex about IP-design
Keynote by Prof. Wurzer at Nordex about IP-designKeynote by Prof. Wurzer at Nordex about IP-design
Keynote by Prof. Wurzer at Nordex about IP-design
 
ANG SEKTOR NG agrikultura.pptx QUARTER 4
ANG SEKTOR NG agrikultura.pptx QUARTER 4ANG SEKTOR NG agrikultura.pptx QUARTER 4
ANG SEKTOR NG agrikultura.pptx QUARTER 4
 
AMERICAN LANGUAGE HUB_Level2_Student'sBook_Answerkey.pdf
AMERICAN LANGUAGE HUB_Level2_Student'sBook_Answerkey.pdfAMERICAN LANGUAGE HUB_Level2_Student'sBook_Answerkey.pdf
AMERICAN LANGUAGE HUB_Level2_Student'sBook_Answerkey.pdf
 
Influencing policy (training slides from Fast Track Impact)
Influencing policy (training slides from Fast Track Impact)Influencing policy (training slides from Fast Track Impact)
Influencing policy (training slides from Fast Track Impact)
 
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
 
FINALS_OF_LEFT_ON_C'N_EL_DORADO_2024.pptx
FINALS_OF_LEFT_ON_C'N_EL_DORADO_2024.pptxFINALS_OF_LEFT_ON_C'N_EL_DORADO_2024.pptx
FINALS_OF_LEFT_ON_C'N_EL_DORADO_2024.pptx
 
YOUVE_GOT_EMAIL_PRELIMS_EL_DORADO_2024.pptx
YOUVE_GOT_EMAIL_PRELIMS_EL_DORADO_2024.pptxYOUVE_GOT_EMAIL_PRELIMS_EL_DORADO_2024.pptx
YOUVE_GOT_EMAIL_PRELIMS_EL_DORADO_2024.pptx
 
How to do quick user assign in kanban in Odoo 17 ERP
How to do quick user assign in kanban in Odoo 17 ERPHow to do quick user assign in kanban in Odoo 17 ERP
How to do quick user assign in kanban in Odoo 17 ERP
 
Raw materials used in Herbal Cosmetics.pptx
Raw materials used in Herbal Cosmetics.pptxRaw materials used in Herbal Cosmetics.pptx
Raw materials used in Herbal Cosmetics.pptx
 
Computed Fields and api Depends in the Odoo 17
Computed Fields and api Depends in the Odoo 17Computed Fields and api Depends in the Odoo 17
Computed Fields and api Depends in the Odoo 17
 

Sensfusion

Table of Contents

1. Introduction .............................................. 3
2. Sensors ................................................... 4
   2.1 Sensor Classification ................................. 4
   2.2 Multiple Sensor Networks .............................. 5
3. Fusion .................................................... 6
   3.1 Levels of Sensor Fusion ............................... 6
   3.2 An I/O-Based Characterization ......................... 7
   3.3 Central versus Decentral Fusion ....................... 7
4. Multiple Model Estimation ................................ 10
   4.1 Benefits of Multiple Models .......................... 10
   4.2 Problem Formulation .................................. 12
   4.3 Methods of Multi-Model Estimation .................... 13
       4.3.1 Model Switching Methods ........................ 13
       4.3.2 The Model Detection Method ..................... 14
       4.3.3 Multiple Hypothesis Testing .................... 16
       4.3.4 Model Fusion ................................... 18
       4.3.5 Combined System ................................ 23
1. Introduction

This note discusses multi-sensor fusion. Through sensor fusion we may combine readings from different sensors, remove inconsistencies, and combine the information into one coherent structure. This kind of processing is a fundamental feature of all animal and human navigation, where multiple information sources such as vision, hearing, and balance are combined to determine position and plan a path to a goal.

While the concept of data fusion is not new, the emergence of new sensors, advanced processing techniques, and improved processing hardware make real-time fusion of data increasingly possible. Despite advances in electronic components, however, developing data processing applications such as automatic guidance systems has proved difficult. Systems that are in direct contact and interact with the real world require reliable and accurate information about their environment. This information is acquired using sensors, that is, devices that collect data about the world around them.

The ability of one isolated device to provide accurate, reliable data about its environment is extremely limited: the environment is usually not very well defined, and sensors are generally not a very reliable interface.

Sensor fusion seeks to overcome the drawbacks of current sensor technology by combining information from many independent sources of limited accuracy and reliability to give information of better accuracy and reliability. This makes the system less vulnerable to the failure of a single component and generally provides more accurate information. In addition, several readings from the same sensor are combined, making the system less sensitive to noise and anomalous observations.
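The variance-reduction claim in the last sentence can be checked numerically. The sketch below (plain Python; the true value, noise level, and sample counts are invented for illustration) averages repeated noisy readings of a fixed quantity and compares the resulting error against that of single readings:

```python
import random
import statistics

def fuse_readings(readings):
    """Fuse repeated readings from one sensor by simple averaging.
    Averaging n independent readings with noise variance s^2 yields an
    estimate with variance s^2 / n, i.e. reduced sensitivity to noise."""
    return statistics.fmean(readings)

random.seed(0)
true_value = 10.0
sigma = 2.0  # standard deviation of a single reading (illustrative)

# Mean absolute error of raw single readings vs. 100-sample averages.
single_errors = [abs(random.gauss(true_value, sigma) - true_value)
                 for _ in range(1000)]
fused_errors = [abs(fuse_readings([random.gauss(true_value, sigma)
                                   for _ in range(100)]) - true_value)
                for _ in range(1000)]

gain = statistics.fmean(single_errors) / statistics.fmean(fused_errors)
# with 100-sample averages the mean error shrinks roughly tenfold (sqrt(100))
```

The tenfold reduction follows from the standard deviation of the mean scaling as 1/sqrt(n).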
2. Sensors

2.1 Sensor Classification

Sensors may be classified into two groups: active or passive. Active sensors send a signal to the environment and measure the interaction between this signal and the environment. Examples include sonar and radar. By contrast, passive sensors simply record signals already present in the environment. Passive sensors include most thermometers and cameras.

Sensors may also be classified according to the medium used for measuring the environment. This type of classification is related to the senses with which humans are naturally endowed.

Electromagnetic sensors (human vision).
  1. Charge-Coupled Devices (CCD)
  2. Radio Detection and Ranging (RADAR)
  3. etc.
Sound wave sensors (human hearing).
  1. Sound Navigation and Ranging (SONAR)
  2. etc.
Odor sensors (human smell).
Touch sensors.
  1. Bumpers
  2. Light Emitting Diodes (LED)
  3. etc.
Proprioceptive sensors (human balance). The senses that tell the body of its position and movement are called proprioceptive senses. Examples:
  1. Gyroscopes
  2. Compass
  3. GPS
  4. etc.
2.2 Multiple Sensor Networks

Multiple sensor networks may be classified by how the sensors in the network interact. Three classes are defined:

Complementary. Sensors are complementary when they do not depend on each other directly, but can be combined to give a more complete image of the environment. Complementary data can often be fused by simply extending the limits of the sensors.

Competitive. Sensors are competitive when they provide independent measurements of the same information. They provide increased reliability and accuracy. Because competitive sensors are redundant, inconsistencies may arise between sensor readings, and care must be taken to combine the data in a way that removes the uncertainties. When done properly, this kind of data fusion increases the robustness of the system.

Cooperative. Sensors are cooperative when they provide independent measurements that, when combined, provide information that would not be available from any one sensor. Cooperative sensor networks take data from simple sensors and construct a new abstract sensor with data that does not resemble the readings from any one sensor.

The major factors dictating the selection of sensors for fusion are the compatibility of the sensors for deployment within the same environment, and the complementary nature of the information derived from the sensors. If the sensors were to merely duplicate the information, then the fusion process would merely be the equivalent of building in redundancy to enhance the reliability of the overall system. On the other hand, the sensors should be complementary enough in terms of such variables as data rates, field of view, range, and sensitivity to make fusion of information meaningful.
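For competitive (redundant) sensors, one standard way to combine two independent readings of the same quantity is inverse-variance weighting, which always yields a fused variance no larger than that of the better sensor. A minimal sketch (the readings and variances are illustrative, and the noise is assumed independent and zero-mean):

```python
def fuse_competitive(z1, var1, z2, var2):
    """Inverse-variance weighted fusion of two independent measurements
    of the same quantity. The fused variance is never larger than the
    smaller of the two input variances."""
    fused_var = 1.0 / (1.0 / var1 + 1.0 / var2)
    fused_z = fused_var * (z1 / var1 + z2 / var2)
    return fused_z, fused_var

# Two redundant range sensors observing the same distance.
z, var = fuse_competitive(10.2, 0.04, 9.9, 0.09)
# the fused estimate lies between the readings, pulled toward the
# more accurate sensor, and var < min(0.04, 0.09)
```

This is the scalar special case of the Kalman-style measurement update used throughout these notes.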
3. Fusion

3.1 Levels of Sensor Fusion

Observational data may be combined, or fused, at a variety of levels, from the raw data level to a state vector (feature) level, or at the decision level. A common differentiation is among high-level fusion, which fuses decisions, mid-level fusion, which fuses parameters concerning features sensed locally, and low-level fusion, which fuses the raw data from sensors. The higher the level of fusion, the smaller the amount of information that must be processed. This hierarchy can be further detailed by classification according to input and output data types, as seen in Figure 3.1.

Fig. 3.1. Levels of sensor fusion. The fundamental levels of data-feature-decision are broken down into a mode classification based on input-output relationship.

An additional dimension to the fusion process is temporal fusion, that is, fusion of data or information acquired over a period of time. This can occur on all levels listed above and hence is viewed as a part of the categorization above.

The human brain works on all levels in the categorization of Figure 3.1 as well as in temporal fusion. An important characteristic of the brain is decision fusion, because of its ability to take a global perspective. The machine is most efficient relative to humans at the data level because of its ability to process large amounts of raw data in a short period of time.
3.2 An I/O-Based Characterization

By characterizing the three fundamental levels in Figure 3.1 (data, feature, decision) by their input-output relationship, we get a number of input-output modes:

Data In-Data Out (DIDO). This is the most elementary form of fusion. Fusion paradigms in this category are generally based on techniques developed in the traditional signal processing domain. For example, sensors measuring the same physical phenomenon, such as two competitive sensors, may be combined to identify obstacles at each side of a robot.

Data In-Feature Out (DIFO). Here, data from different sensors are combined to extract some form of feature of the environment or a descriptor of the phenomenon under observation. Depth perception in humans, obtained by combining the visual information from two eyes, can be seen as a classical example of this level of fusion.

Feature In-Feature Out (FIFO). In this stage of the hierarchy, both input and output are features. For example, shape features from an imaging sensor may be combined with range information from a radar to provide a measure of the volumetric size of the object being studied.

Feature In-Decision Out (FIDO). Here, the inputs are features and the output of the fusion process is a decision, such as a target class recognition. Most pattern recognition systems perform this kind of fusion. Feature vectors are classified based on a priori information to arrive at a class or decision about the pattern.

Decision In-Decision Out (DIDO). This is the last step in the hierarchy. Fusion at this level implies that decisions are derived at a lower level, which in turn implies that the fusion processes at the previously discussed levels have also been performed. Examples of decision level fusion include weighted methods (voting techniques) and inference.

In practical problems, it is likely that fusion in many of the modes defined above will be incorporated into the design to achieve optimal performance.
The process of extracting relevant information from the data, in terms of features and decisions, may on the one hand throw away information, but may on the other hand also reduce the noise that degrades the quality of the ensuing decision.

3.3 Central versus Decentral Fusion

Centralized architectures assume that a single processor performs the entire data fusion process. With the growing complexity of systems, centralized fusion is becoming less attractive due to the high communication load and reliability concerns. An alternative is to perform a local estimation of states or parameters from available data, followed by a fusion with other similar features to form a global estimate. When done right, this assures a graceful degradation of the system as nodes fail.

A fully decentralized architecture is comprised of a number of nodes. Each node processes sensor data locally, validates the data, and uses the observations to update its state estimate via
Fig. 3.2. Alternate architectures for multi-sensor identity fusion. (a) Data level fusion, (b) Feature level fusion.
standard Kalman filtering. The nodes then communicate their results, in terms of state and possibly parameter estimates, to other similar nodes, where they are used locally to update the local estimate in a second update step. The estimate now available is a global estimate, and it may be verified to be the same as would be achieved using a centralized architecture. The decentralized architecture is more resistant to node failures.

Fig. 3.3. Decentralized fusion of data. Local estimates of states and parameters are updated locally by data from other fusion nodes, achieving a global estimate.
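The second, global update step can be illustrated with scalar states. If the local estimation errors are independent, each node can fuse the exchanged (estimate, covariance) pairs in information form, where inverse covariances and information states simply add. This is a sketch under that independence assumption, with made-up node values rather than the notation of Figure 3.3:

```python
def information_fuse(estimates):
    """Fuse independent local (estimate, covariance) pairs in information
    form: inverse covariances (informations) add, and so do the
    information states x / P. Scalar states, independent errors assumed."""
    total_info = sum(1.0 / P for _, P in estimates)
    total_info_state = sum(x / P for x, P in estimates)
    return total_info_state / total_info, 1.0 / total_info

# Each node runs a local estimator, then broadcasts (x_hat, P);
# every node can compute the same global estimate from the exchanged results.
node_estimates = [(4.9, 0.5), (5.2, 0.25), (5.0, 1.0)]
x_g, P_g = information_fuse(node_estimates)
# the global covariance P_g is smaller than any local covariance
```

In practice the cross-correlations introduced by repeated communication must be handled with care; the independence assumption above is what makes this simple form valid.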
4. Multiple Model Estimation

This chapter extends the consideration of estimation theory to the use of multiple process models. Multiple process models offer a number of important advantages over single model estimators. Multiple models allow a modular approach to be taken. Rather than develop a single model which must be accurate for all possible types of behaviour by the true system, a number of different models can be derived. Each model has its own properties and, with an appropriate choice of a data fusion algorithm, it is possible to span a larger model space.

Four different strategies for multiple model estimation are examined in this chapter:

– multiple model switching
– multiple model detection
– multiple hypothesis testing
– multiple model fusion

Although the details of each scheme are different, the first three all use fundamentally the same approach. The designer specifies a set of models. At any given time only one of these models is correct and all the other models are incorrect. The different strategies use different types of test to identify which model is correct and, once this has been achieved, the information from all the other models is neglected.

The latter strategy, model fusion, utilizes the fact that process models are a source of information and that their predictions can be viewed as the measurements from a virtual sensor. Therefore, multiple model fusion can be cast in the same framework as multisensor fusion, and a Kalman update rule can be used to consistently fuse the predictions of multiple process models together. This strategy has many important benefits over the other approaches to multiple model management. It includes the ability to exploit information about the differences in behaviour of each model. As a result, the fusion is synergistic: the performance of the fused system can be better than that of any individual model.
4.1 Benefits of Multiple Models

Increasing the complexity of a process model does not necessarily lead to a better estimator. As the process model becomes more complicated, it makes more and
more assumptions about the behaviour of the true system. These are embodied as an increasing number of constraints on the situations in which the model fits the real world. When the behaviour of the true system is consistent with the assumptions in the model, its performance is very good. However, when the behaviour is not consistent, performance can be significantly worse. Simply increasing the order of the model without regard to this phenomenon could lead to a very complicated, high order model which works extremely well in very limited circumstances. In most other situations, its performance could be poor.

Multimodel estimation techniques are a method which addresses this underlying problem. Rather than seek a single model which is sufficiently complicated and flexible that it is consistent with all possible types of behaviour for the true system, a number of low order approximate systems are developed. Each approximate system is developed with different assumptions about the behaviour of the true system. Since these assumptions are different, the situations in which each model fits are different. By ensuring a suitable complement over the range of operation of the system, good performance can be achieved under all conditions. Although the accuracy of a single model in a specific circumstance is lost, this is balanced by the range of situations over which performance is good. The models are combined so that the strengths of one cover for the weaknesses of the others. Figure 4.1 provides a representation of this.

Fig. 4.1. The use of multiple models. Three different models have been derived with different but overlapping domains of applicability within the state space of interest. Although a single model is not capable of describing the entire behaviour of the system, a suitable combination of all of the models can.
4.2 Problem Formulation

Suppose a set of process models has been derived. Each uses a different set of assumptions about the true system, and the result is a set of approximate systems. The i'th approximate system is described using the process and observation models

    x_i(k+1) = F_i(k) x_i(k) + G_i(k) u(k) + v_i(k)                (4.1)
    z_i(k)   = H_i(k) x_i(k) + w_i(k)                              (4.2)

The state space of the approximate system is related to the true system state, x(k), according to the structural equation

    x_i(k) = T_i(k) x(k)

The problem is to find the estimate ŷ(k|k) of the variables of interest¹ with covariance P_yy(k|k) which takes account of all of the different approximate models, such that the trace of P_yy(k|k) is minimized. The strategy is not limited to just combining the estimates of the variables in an output stage; the estimate may also be fed back to each of the approximate systems, directly affecting their estimates. This is reflected in the model management strategy, which may be summarized in terms of two sets of functions: the critical value output function

    ŷ(k|k) = f_y(x̂_1(k|k), ..., x̂_N(k|k))                          (4.3)

which combines the multiple model estimates, and the approximate system update function

    x̂_i(k|k) = f_i(x̂_1(k|k), ..., x̂_N(k|k))                        (4.4)

which updates the i'th model state. A successful multimodel management strategy should have the following properties:

Consistency. The estimate should always be consistent². The estimates become inconsistent when the discrepancy between the estimated covariance and the true covariance is not positive semidefinite.

Performance. There should be a performance advantage over using a single process model. This justifies the extra complexity of using multiple models instead of a single model. These benefits include more accurate estimates and more robust solutions.

Theory. The method should have a firm theoretical foundation. Not only does this mean that the range of applicability is known, but it is also possible to generalize the technique to a wide range of applications.

¹ The whole state space may not be of interest, and y(k) may thus be a subset of x(k).
² That is, E[x̃(k|k) x̃(k|k)ᵀ | Zᵏ] ≤ P(k|k), where x̃(k|k) are the state errors and Zᵏ are the measurements up to time k.
4.3 Methods of Multi-Model Estimation

The principle and practice of multiple model estimation has been used extensively for many years. This section reviews various multimodel schemes.

4.3.1 Model Switching Methods

The simplest approach to estimation with multiple models is model switching. It is simple to understand, computationally efficient, and readily facilitates the use of many different models. The essential idea is shown in Figure 4.2.

Fig. 4.2. The model switching method for estimation with multiple models. The output from the bank of filters is that of the filter which uses an appropriate model for the given situation.

N filters are implemented, one for each process model. All the filters operate completely independently and in parallel with one another. They make predictions, perform updates, and generate the estimates of the critical values of the system. Combining the information from the different models is very simple. It is assumed that, at any time k, only one model is appropriate. The output from the bank of filters is that of the filter which uses the appropriate model. The problem is to identify the appropriate model. The output model management function is

    ŷ(k) = Σ_i w_i(k) ŷ_i(k)                                       (4.5)

where

    w_i(k) = 1 if model i is selected,
    w_i(k) = 0 if model i is not selected.                         (4.6)

Since only one model is chosen at any time, Σ_i w_i(k) sums to unity. The filters combine information only for output purposes, and their respective state spaces are unchanged.

The key design issue with this approach is to choose the switching logic which determines the values of w_i(k); this is dependent on the specific application. The
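Equations (4.5) and (4.6), together with a Shuttle-style acceleration threshold, can be sketched in a few lines. The filter estimates, the threshold value, and the measured acceleration below are invented for illustration:

```python
def select_estimate(estimates, weights):
    """Output model management function (cf. eqs. 4.5/4.6): w_i is 1 for
    exactly one model and 0 for the rest, so the output is simply the
    estimate of the selected filter."""
    assert sum(weights) == 1 and all(w in (0, 1) for w in weights)
    return sum(w * x for w, x in zip(weights, estimates))

def switching_logic(acceleration, threshold=0.5):
    """Ad hoc switching rule in the spirit of the Shuttle example:
    model 0 = cruise, model 1 = accelerating (threshold is illustrative)."""
    return [0, 1] if abs(acceleration) > threshold else [1, 0]

# Two filters running in parallel produce estimates; the logic picks one.
estimates = [12.1, 12.6]        # cruise-model and acceleration-model estimates
weights = switching_logic(0.8)  # measured acceleration exceeds the threshold
x_out = select_estimate(estimates, weights)  # -> 12.6
```

Note that the threshold and the decision rule are exactly the kind of empirically tuned, application-specific logic criticized at the end of this section.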
land-based navigation of the Space Shuttle (Ewell 1988), for example, uses two different process models which correspond to accelerating and cruising flight. The accelerating model is used when the acceleration exceeds a predefined limit. When acceleration is below some critical level, the shuttle is assumed to be in cruise mode. In these cases, complex gravitational and atmospheric models are incorporated into the equations of motion. The threshold was selected using extensive empirical tests.

Switching on the basis of measurements (or estimates) may lead to jitter if the switching criterion is noisy and its mean value is near the switching threshold. A more refined solution is to employ hysteresis switching, which switches only when there is a significant difference between the performance of the different models.

Although the model switching method and its variants are simple to understand and use, they have a significant number of problems. The most important is that the choice of the switching logic is often arbitrary and has to be selected empirically; there is no theoretical justification that the switching logic is correct.

4.3.2 The Model Detection Method

The model detection approach is a more sophisticated version of the model switching method. Rather than switch on the basis of various ad hoc thresholds, it attempts to detect the least complex (parsimonious (Ljung 1987)) model which is capable of describing the most significant features of the true system.

The issue is a bias/variance tradeoff. As a model becomes more complex, it becomes a better description of the true system, and the bias becomes progressively smaller. However, a more complex model includes more states. Given that there is only a finite amount of noisy data, the information has to be spread between the states, and the variance on the estimates of all of the states increases.
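The jitter problem and its hysteresis remedy can be sketched as follows; the mode names and thresholds are hypothetical and not taken from the Shuttle application:

```python
def hysteresis_switch(mode, accel, low=0.8, high=1.2):
    """Hysteresis switching sketch: leave 'cruise' only when the acceleration
    clearly exceeds `high`, and return to it only when it falls below `low`.
    Values between the two thresholds keep the current mode, which suppresses
    jitter when the (noisy) criterion hovers near a single threshold."""
    if mode == "cruise" and accel > high:
        return "accelerating"
    if mode == "accelerating" and accel < low:
        return "cruise"
    return mode
```

With a single threshold, noise around that value would toggle the mode on every sample; the dead band between `low` and `high` prevents this.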
For example, if a vehicle is driving in a straight line, a very simple model may be used. However, when the vehicle turns, the effects of dynamics and slip may have to be included. By ensuring that the least complex model is used at any given point in time, the variance component of the bias/variance tradeoff is minimized, so that the model is not overly complicated for the maneuvers in question.

The output model management function is of the same form as Equations (4.5) and (4.6). No information is propagated from one model to the next and the approximate system update is hence unity.

One method for testing whether a model is parsimonious is to use model reduction techniques. These examine how a high order model can be approximated by a low order model. A crucial component of any model reduction scheme is the ability to assess the relative importance that different states have for the behaviour of the system. If a number of states have very little effect on the output, then they can be deleted without introducing substantial errors.

To make the contributions from different modes clear, we turn to balanced realizations (Silvermann, Shookoohi, and Dooren 1983). In a balanced realization a linear transformation is applied to the state space of the system. In this form, the contribution of different modes is made clear, and decision rules can be easily formulated.
To calculate the balanced realization for the linear system

x(k+1) = A(k) x(k) + B(k) u(k)    (4.7)

z(k) = C(k) x(k)    (4.8)

its observability Grammian, W_o, and controllability Grammian, W_c, are evaluated across a window of length N according to the equations

W_o(k, N) = \sum_{i=k}^{k+N} \Phi^T(i, k) C^T(i) C(i) \Phi(i, k)    (4.9)

W_c(k, N) = \sum_{i=k-N}^{k} \Phi(k, i) B(i) B^T(i) \Phi^T(k, i)    (4.10)

where \Phi(i, m) is the state transition matrix from discrete step m to i. The Grammians are diagonalized by finding the matrices T(k) and \Sigma(k) such that

T^{-T}(k) W_o(k, N) T^{-1}(k) = T(k) W_c(k, N) T^T(k) = \Sigma(k)    (4.11)

If the linear transformation T(k) is applied to the state vector, the calculated Grammians are diagonal and both equal to \Sigma(k), the matrix of singular values:

\Sigma(k) = diag( \sigma_1(k), \sigma_2(k), \ldots, \sigma_n(k) )    (4.12)

The singular values reflect the significance that each state has for the behaviour of the system (Silvermann, Shookoohi, and Dooren 1983). Both the observability and controllability of the state are included in the measure. If a state is almost unobservable, it has little effect on the measured output of the system. If it is almost uncontrollable, the control inputs can exert little influence on its final value. If it is both unobservable and uncontrollable, its value is not affected by the control inputs and it does not change the output of the system.

Example 4.3.1. A vehicle dynamics model may include tire stiffness parameters in the state vector in order to model a number of physical parameters in the tire that change over time.
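For a time-invariant system the windowed Grammians of Equations (4.9) and (4.10) reduce to finite sums with \Phi(i, k) = A^{i-k}, and the singular values of Equation (4.12) can be obtained from the eigenvalues of the product of the two Grammians. A minimal sketch under those assumptions:

```python
import numpy as np

def windowed_grammians(A, B, C, N):
    """Finite-window observability/controllability Grammians (Eqs. 4.9-4.10)
    for a time-invariant system, so the transition matrix is a matrix power."""
    n = A.shape[0]
    Wo = np.zeros((n, n))
    Wc = np.zeros((n, n))
    Ai = np.eye(n)  # A**i, accumulated incrementally
    for _ in range(N):
        Wo += Ai.T @ C.T @ C @ Ai
        Wc += Ai @ B @ B.T @ Ai.T
        Ai = A @ Ai
    return Wo, Wc

def balanced_singular_values(Wo, Wc):
    """Singular values of the balanced realization (Eq. 4.12): the square
    roots of the eigenvalues of Wc * Wo, sorted in decreasing order."""
    eig = np.linalg.eigvals(Wc @ Wo)
    return np.sort(np.sqrt(np.abs(eig.real)))[::-1]
```

A state that is neither controllable nor observable produces a zero singular value and is thus a candidate for deletion from the model.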
When the vehicle drives in a straight line, the forces and slip angles are small, rendering the tire stiffness parameters unobservable. In this situation, the filter should switch to a lower order model without tire modeling.

As the method above is only used for the switching rule, linearization is acceptable for nonlinear models, but the overhead associated with this method is still significant. Moreover, it may be difficult to evaluate the results.

The two approaches which have been described so far use deterministic tests to identify which model is appropriate. The alternative is to consider probabilistic methods, which are described next.
4.3.3 Multiple Hypothesis Testing

Multiple Hypothesis Testing (MHT) (Magill 1965) is one of the most widely used approaches to multimodel estimation. Rather than using bounds to define regions of applicability, each model is considered to be a candidate for the true model. For each model, the hypothesis that it is the true model is raised, and the probability that the hypothesis is correct is evaluated using the observation sequence. Over time, the most accurate model is assigned the largest probability and so dominates the behaviour of the estimator.

Figure 4.3 shows the structure of this method. Each model is implemented in its own filter and the only interaction occurs in forming the output. The output is a function of the estimates made by the different models as well as the probability that each model is correct. The fused estimate is not used to refine the estimates maintained in each model, and so the approximate system update functions are unity.

Fig. 4.3. The model hypothesis method. The output is a function of the estimates made by the different models as well as the probability that each model is correct.

The following two assumptions are made:

1. The true model is one of those proposed.
2. The same model has been in action since k = 0.

From these two assumptions, a set of hypotheses is formed, one for each model. The hypothesis for the i'th model is

H_i : Model i is correct    (4.13)

By assumption 1 there is no null hypothesis, because the true model is one of the models in the set. The probability that model i is correct at time k, conditioned on the measurements up to that time, Z^k, is
p_i(k) = Pr( H_i | Z^k )    (4.14)

The initial probability that H_i is correct is given by p_i(0), which according to assumption 1 sums to unity over all models.

The probability assigned to each model changes through time as more information becomes available. Each filter predicts and updates according to the Kalman filter equations. Consider the evolution of p_i from k-1 to k. Using Bayes' rule,

p_i(k) = Pr( H_i | Z^k ) = \frac{ p( z(k) | H_i, Z^{k-1} ) Pr( H_i | Z^{k-1} ) }{ \sum_j p( z(k) | H_j, Z^{k-1} ) Pr( H_j | Z^{k-1} ) }    (4.15)

The term p( z(k) | H_i, Z^{k-1} ) is the probability that the observation z(k) would be made given that model i is valid. This probability may be calculated directly from the innovations \nu_i(k) = z(k) - \hat{z}_i(k|k-1). Assuming that the innovation is Gaussian, zero mean, and has covariance S_i(k|k-1), the likelihood is

\Lambda_i(k) = \frac{1}{ (2\pi)^{m/2} | S_i(k|k-1) |^{1/2} } \exp( -q_i(k)/2 )    (4.16)

q_i(k) = \nu_i^T(k) S_i^{-1}(k|k-1) \nu_i(k)    (4.17)

where m is the dimension of the innovation vector. The innovation covariance S_i(k|k-1) is maintained by the Kalman filter and given by

S_i(k|k-1) = C_i(k) P_i(k|k-1) C_i^T(k) + R(k)    (4.18)

The MHT now works by recursion: first calculate \Lambda_i(k) for each model. The new probabilities are then given by

p_i(k) = \frac{ \Lambda_i(k) p_i(k-1) }{ \sum_j \Lambda_j(k) p_j(k-1) }    (4.19)

The minimum mean squared error estimate is the expected value. The output function, i.e. the output of the multiple model estimator, is thus given by

\hat{x}(k|k) = E[ x(k) | Z^k ]    (4.20)
= \sum_i E[ x(k) | H_i, Z^k ] Pr( H_i | Z^k )    (4.21)
= \sum_i \hat{x}_i(k|k) Pr( H_i | Z^k )    (4.22)
= \sum_i \hat{x}_i(k|k) p_i(k)    (4.23)
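The recursion of Equations (4.16)–(4.19) and the output of Equation (4.23) can be sketched directly (a minimal illustration, not part of the original notes):

```python
import numpy as np

def gaussian_likelihood(nu, S):
    """Innovation likelihood (Eqs. 4.16-4.17): Gaussian, zero mean,
    covariance S, evaluated at the innovation nu."""
    m = len(nu)
    q = float(nu @ np.linalg.solve(S, nu))          # Eq. (4.17)
    return np.exp(-0.5 * q) / np.sqrt((2 * np.pi) ** m * np.linalg.det(S))

def mht_update(probs, likelihoods):
    """Recursive probability update (Eq. 4.19)."""
    p = np.asarray(probs) * np.asarray(likelihoods)
    return p / p.sum()

def fused_estimate(probs, estimates):
    """Probability-weighted output (Eq. 4.23)."""
    return sum(p * np.asarray(x) for p, x in zip(probs, estimates))
```

Over repeated updates, the model whose innovations are most consistent with its predicted covariance accumulates probability and dominates the output.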
The covariance in the estimate can be found to be (Bar-Shalom and Fortmann 1988)

P(k|k) = \sum_i p_i(k) [ P_i(k|k) + ( \hat{x}_i(k|k) - \hat{x}(k|k) )( \hat{x}_i(k|k) - \hat{x}(k|k) )^T ]    (4.24)

which is a sum of probability-weighted terms from each model. In addition to the direct covariance from the models, a term is included that accounts for the fact that \hat{x}_i(k|k) is not equal to \hat{x}(k|k).

The MHT approach was developed initially for problems related to system identification: the dynamics of a nominal plant were specified, but there was some uncertainty in the values of several key parameters. A bank of filters was implemented, one for each candidate parameter value. Over time, one model acquires a significantly greater probability than the other models, and it corresponds to the model whose parameter values are most appropriate.

The basic form of MHT has been widely employed in missile tracking, where different motion models correspond to different types of maneuvers (Bar-Shalom and Fortmann 1988). Experience, however, shows that there are numerous practical problems with applying MHT. The performance of the algorithm depends upon a significant difference between the residual characteristics in the correct and the mismatched filters, which is sometimes difficult to establish.

The three methods reviewed so far each have a number of shortcomings, many of which stem from the fact that they use fundamentally the same principle. At each instant in time, all the methods assume that only one model is correct, and the multimodel fusion problem is to identify that model. However, this entails the extremely strong assumption that all the other models, no matter how close they are to the current most appropriate model, provide no useful information whatever.

These shortcomings motivate the development of a new strategy for fusing information from multiple models.
This approach, known as model fusion, is described next.

4.3.4 Model Fusion

The model fusion approach treats each process model as if it were a virtual sensor. The i'th process model is equivalent to a sensor which measures the predicted value \hat{x}_i(k|k-1). The measurement is corrupted by noise (the prediction error) and the same principles and techniques as in multi-sensor fusion may be used.

Figure 4.4 illustrates the method for two approximate systems. Each model is implemented in its own filter and, in the prediction step, each filter predicts the future state of the system. Since each filter uses a different model and a different state space, the predictions differ. Filter 1 (or 2) propagates its prediction (or some function of it) to filter 2 (or 1), which treats it as if it were an observation. Each filter then updates using prediction and sensor information alike.
Fig. 4.4. Model fusion architecture for two approximate systems. Each filter predicts the future state of the system using the same inputs, and updates using sensor information and the prediction propagated from the other filter.

The process model is used to predict the future state of the system. Using the process model, the prediction summarizes all the previous observation information. The estimate at time k is thus not restricted to using the information contained in z(k); the prediction allows temporal fusion with data from all past measurements.

The fusion of data is best appreciated by considering the information form (Maybeck 1979) of the Kalman filter. The amount of information maintained in an estimate is defined to be the inverse of its covariance matrix. With the Kalman gain K(k), the covariance update may be rewritten as

P(k|k) = P(k|k-1) - K(k) C(k) P(k|k-1)    (4.25)

P(k|k) P^{-1}(k|k-1) = I - K(k) C(k)    (4.26)

Using this result in combination with the Kalman gain expressed in terms of the updated covariance (Maybeck 1979),

K(k) = P(k|k) C^T(k) R^{-1}(k)    (4.27)

where R(k) is the measurement covariance, the state measurement update may be written as

\hat{x}(k|k) = P(k|k) [ P^{-1}(k|k-1) \hat{x}(k|k-1) + C^T(k) R^{-1}(k) z(k) ]    (4.28)

A similar expression can be found for the updated information matrix

P^{-1}(k|k) = P^{-1}(k|k-1) + C^T(k) R^{-1}(k) C(k)    (4.29)

The estimate in Equation (4.28) is therefore a weighted average of the prediction and the observation. Intuitively, the weights must be inversely proportional to their respective covariances.
Seen in the light of Equation (4.28), predictions and observations are treated equally by the Kalman filter. It is therefore possible to consider the prediction from filter 1 (2) as an observation, with covariance P_1(k|k-1) (respectively P_2(k|k-1)), that can be used to update filter 2 (1).
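The information-form update of Equations (4.28) and (4.29) can be sketched as follows; feeding in another filter's prediction as z, with C = I and R equal to that prediction's covariance, implements the prediction-as-observation idea:

```python
import numpy as np

def information_update(x_pred, P_pred, z, R, C):
    """Information-form measurement update: the estimate is an
    information-weighted average of prediction and observation."""
    Ri = np.linalg.inv(R)
    Y = np.linalg.inv(P_pred) + C.T @ Ri @ C                 # Eq. (4.29)
    P = np.linalg.inv(Y)
    x = P @ (np.linalg.inv(P_pred) @ x_pred + C.T @ Ri @ z)  # Eq. (4.28)
    return x, P
```

With equal covariances the result sits exactly halfway between prediction and observation, with half the uncertainty of either, as the weighted-average interpretation suggests.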
Batch Estimator. This approach is fundamentally different from the other approaches described in this chapter. Rather than assuming that only one process model is correct, all the models are treated as valid descriptions of the true system. At some points in time one model might be better than another, but this does not mean that the poorer model yields no useful information. On the contrary, since all the models are capable of making consistent predictions, they all provide information.

Each process model makes its own estimate of the interesting variables. The different estimates can be fused together efficiently and rigorously using a batch estimator (Bierman 1977). Defining

\hat{x}_s(k) = [ \hat{x}_1^T(k|k) \; \cdots \; \hat{x}_n^T(k|k) ]^T    (4.30)

H_s = [ I \; \cdots \; I ]^T    (4.31)

P_s(k|k) =
\begin{bmatrix}
P_{11}(k|k) & \cdots & P_{1n}(k|k) \\
\vdots & \ddots & \vdots \\
P_{n1}(k|k) & \cdots & P_{nn}(k|k)
\end{bmatrix}    (4.32)

the estimates of interest are then given by

P(k|k) = ( H_s^T P_s^{-1}(k|k) H_s )^{-1}    (4.33)

\hat{x}(k|k) = P(k|k) H_s^T P_s^{-1}(k|k) \hat{x}_s(k)    (4.34)

There are several advantages to this method. First, it is not necessary to assume that the real physical system switches from mode to mode, nor is it necessary to develop a rule to decide which mode should be used. Rather, the information is incorporated using the Kalman filter update equations.

Suppose that filter 1 is the correct model, characterized by the fact that its (consistent) prediction has a smaller covariance than that of filter 2. Filter 2 then weights the predictions from filter 1 very highly in its update. Conversely, filter 1 weights the predictions from filter 2 by a very small fraction. The result is that the information which is most correct is given the greatest weight.
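A sketch of the batch estimator of Equations (4.30)–(4.34), for n estimates of the same d-dimensional quantity with joint covariance P_s:

```python
import numpy as np

def batch_fuse(estimates, P_joint):
    """Batch estimator (Eqs. 4.30-4.34): fuse n possibly correlated estimates
    of the same d-dimensional quantity; P_joint is the (n*d x n*d) joint
    covariance with the cross correlations in the off-diagonal blocks."""
    d = len(estimates[0])
    n = len(estimates)
    xs = np.concatenate([np.asarray(x) for x in estimates])  # Eq. (4.30)
    H = np.vstack([np.eye(d)] * n)                           # Eq. (4.31)
    W = np.linalg.inv(P_joint)
    P = np.linalg.inv(H.T @ W @ H)                           # Eq. (4.33)
    x = P @ H.T @ W @ xs                                     # Eq. (4.34)
    return x, P
```

For two uncorrelated unit-variance estimates this reduces to the familiar average with halved variance; correlated estimates are down-weighted automatically through the off-diagonal blocks of P_joint.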
This method is also capable of exploiting additional information which none of the individual filters use.

The Structure of Prediction Propagation. In general, the filters cannot directly propagate and fuse the entire prediction vectors with one another, because different models use different state spaces. One model might possess states which the other models do not have. Conversely, two models might have states which are related to one another through a transformation (for example, two filters might estimate the absolute position of two different points rigidly fixed to the same vehicle). Only information which can be expressed in a common space can be propagated between the filters.
A model translator function g_1( \hat{x}_1(k) ) propagates the prediction from filter 1 into a state space which is common to both filters. Similarly, the prediction from filter 2 is propagated using its own translator function g_2. The fusion space for both filters consists of parameters which are the same in both filters. These parameters obey the condition that

g_1( x_1(k) ) = g_2( x_2(k) )    (4.35)

for all choices of the true state space vector x(k), where x_1(k) and x_2(k) denote the corresponding approximate-system states. Candidate quantities include

– States which are common to both models (such as the position of the same point measured in the same coordinate system).
– The predicted observations.
– The predicted variables (states of interest).

However, it is important to stress that many states do not obey this condition. This is because many states are lumped parameters: their value reflects the net effects of many different physical processes. Since different approximate systems use different assumptions, different physical processes are lumped together in different parameters.

Each filter treats the propagation from the other filter as a type of observation which is appended to the observation vector. Therefore, the observation vector which is received by filter 1 is

z_1'(k) = [ z(k) ; g_2( \hat{x}_2(k|k-1) ) ]    (4.36)

The model fusion is a straightforward procedure. For each pair of models, find a common state space in which the condition of Equation (4.35) holds. The observation space for each filter is augmented appropriately, and the application of the filter follows trivially.

The model fusion system relies on the Kalman filter for its information propagation, and we hence have to take account of the fact that the prediction errors in the different models are correlated with one another in order to get consistent estimates. Figure 4.5 shows the information flows which occur when two models are fused.
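The augmented observation of Equation (4.36) is simply a stacking operation; the translator function used here is a hypothetical stand-in for the application-specific g:

```python
import numpy as np

def augmented_observation(z, xhat_other, g_other):
    """Augmented observation for one filter (Eq. 4.36, sketch): the real
    sensor data stacked with the other filter's prediction, translated into
    the common fusion space by the translator function g_other."""
    return np.concatenate([np.asarray(z), g_other(np.asarray(xhat_other))])
```

The corresponding observation model and noise covariance must be augmented consistently, including the cross correlations discussed below.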
Information can be classified in two forms: information which flows from the outside (and is independent) and that which flows within (the predictions which propagate between the filters). The external flows are indicated by dashed lines, internal flows by solid ones. Both flows play a key role in the prediction and the update.

In the prediction step, each filter uses its own process model to estimate the future state of the system. The process noises experienced by each model are not independent, for two reasons. First, there is a component due to the true system process noise. Second, the modeling error terms for each filter are evaluated about the true system state. Since the process noises are correlated, the prediction errors are also correlated. Each filter
Fig. 4.5. Information flow in the fused two-model case. The external (and independent) flows are indicated by dashed lines, internal flows (the predictions) by solid ones.

updates using its own prediction, the prediction from the other filter, and the sensor information. Since the process noises are correlated, the prediction errors committed by both models,

\tilde{x}_1(k|k-1) = A_1(k-1) \tilde{x}_1(k-1|k-1) + w_1(k-1)    (4.37)

\tilde{x}_2(k|k-1) = A_2(k-1) \tilde{x}_2(k-1|k-1) + w_2(k-1)    (4.38)

are also correlated. The cross correlation is

P_{12}(k|k-1) = A_1(k-1) P_{12}(k-1|k-1) A_2^T(k-1) + Q_{12}(k-1)    (4.39)

where Q_{12} is the cross correlation matrix between the two process noise models.

As can be seen, the cross correlations between the prediction errors evolve in a similar fashion to the covariance of each state estimate. First, the diffusion step scales the cross correlation; second, the correlation is reinforced by adding the process noise correlation.

More insight is gained by looking at a Taylor series expansion of the measurement vector given in Equation (4.36). The innovation for filter 1 is

\nu_1(k) = z_1'(k) - E[ z_1'(k) | Z^{k-1} ]    (4.40)

\approx
\begin{bmatrix}
C_1(k) \tilde{x}_1(k|k-1) + v(k) \\
\nabla g_1 \tilde{x}_1(k|k-1) - \nabla g_2 \tilde{x}_2(k|k-1)
\end{bmatrix}    (4.41)

where \nabla g_i represents the Jacobian of the translator function g_i, i.e. a linearization about the current state. Taking outer products leads to the following cross covariance equation
E[ \nu_1(k) \tilde{x}_1^T(k|k-1) ] =
\begin{bmatrix}
C_1(k) P_1(k|k-1) \\
\nabla g_1 P_1(k|k-1) - \nabla g_2 P_{21}(k|k-1)
\end{bmatrix}    (4.42)

which shows how information flows between the two filters. The first component describes how the observation information is injected. The second component, which determines the weight placed on the prediction propagated from filter 2, is the difference between the covariance of filter 1 and the cross correlation between filters 1 and 2, all projected into the same space. In effect, only the information which is unique to filter 2 is propagated to filter 1.

As the two filters become more and more similar to one another, they become more and more tightly correlated. Less information is propagated from filter 2 and, in the limit when both filters use exactly the same process models, no information is passed between them at all. Model fusion does not invent information over and above what can be obtained from the sensors; rather, it is a different way to exploit the observations. In the other limit, as the models become less and less correlated with one another, the predictions contain more and more independent information and so are assigned a progressively greater weight. The update enforces the correlation between the filters through the fact that the same observations, with the same observation noises, are used to update both filters.

The rather complex cross correlations in the observation noise may be simplified by treating the network of filters and cross correlation filters as components of a single, all-encompassing system or combined system.

4.3.5 Combined System

The combined system is formed by stacking the state spaces of each approximate system. The combined state space vector is

x_c(k) = [ x_1^T(k) \; x_2^T(k) \; \cdots \; x_n^T(k) ]^T    (4.43)

The dimension of x_c is the sum of the dimensions of the subsystem filters.
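The cross-correlation recursion of Equation (4.39) is a one-line computation, sketched here for illustration:

```python
import numpy as np

def cross_covariance_predict(P12, A1, A2, Q12):
    """Cross-covariance prediction between two filters (Eq. 4.39): the
    transition matrices scale the existing correlation, and the correlated
    process noise Q12 reinforces it."""
    return A1 @ P12 @ A2.T + Q12
```

When Q12 is zero and the dynamics are stable, the correlation decays geometrically; correlated process noise keeps it alive, which is why it must be tracked for consistent fusion.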
The covariance of the combined system is

P_c(k|k) =
\begin{bmatrix}
P_{11}(k|k) & P_{12}(k|k) & \cdots & P_{1n}(k|k) \\
P_{21}(k|k) & P_{22}(k|k) & \cdots & P_{2n}(k|k) \\
\vdots & \vdots & \ddots & \vdots \\
P_{n1}(k|k) & P_{n2}(k|k) & \cdots & P_{nn}(k|k)
\end{bmatrix}    (4.44)

The diagonal subblocks are the covariances of the subsystems, and the off-diagonal subblocks are the correlations between them.

The combined system evolves over time according to a process model which is corrupted by process noise. These are given by stacking the components from each subsystem:
x_c(k+1) =
\begin{bmatrix}
f_1( x_1(k), u(k), k ) \\
\vdots \\
f_n( x_n(k), u(k), k )
\end{bmatrix}
+ w_c(k)    (4.45)

The associated covariance is

Q_c(k) =
\begin{bmatrix}
Q_{11}(k) & Q_{12}(k) & \cdots & Q_{1n}(k) \\
Q_{21}(k) & Q_{22}(k) & \cdots & Q_{2n}(k) \\
\vdots & \vdots & \ddots & \vdots \\
Q_{n1}(k) & Q_{n2}(k) & \cdots & Q_{nn}(k)
\end{bmatrix}    (4.46)

The observation vectors from the individual subsystems are also aggregated to yield the combined system's observation vector. However, simply stacking the observation vectors from the individual subsystems can introduce redundant terms which must be eliminated. This problem can be illustrated by considering the innovation vectors for the two system case. Acting independently of one another, the innovations for the two approximate systems are

\nu_1(k) =
\begin{bmatrix}
z(k) - h_1( \hat{x}_1(k|k-1) ) \\
g_2( \hat{x}_2(k) ) - g_1( \hat{x}_1(k) )
\end{bmatrix}    (4.47)

\nu_2(k) =
\begin{bmatrix}
z(k) - h_2( \hat{x}_2(k|k-1) ) \\
g_1( \hat{x}_1(k) ) - g_2( \hat{x}_2(k) )
\end{bmatrix}    (4.48)

Both innovation vectors have the same model fusion component g_1( \hat{x}_1(k) ) - g_2( \hat{x}_2(k) ), apart from a sign. Eliminating the redundant terms, the combined innovation vector becomes

\nu_c(k) =
\begin{bmatrix}
z(k) - h_1( \hat{x}_1(k|k-1) ) \\
z(k) - h_2( \hat{x}_2(k|k-1) ) \\
g_1( \hat{x}_1(k) ) - g_2( \hat{x}_2(k) )
\end{bmatrix}    (4.49)

To be continued ....
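Eliminating the redundant model fusion component, as in Equation (4.49), can be sketched as follows; the observation and translator functions here are hypothetical stand-ins for the application-specific h_i and g_i:

```python
import numpy as np

def combined_innovation(z, h1, h2, g1, g2, x1_pred, x2_pred):
    """Combined innovation vector (Eq. 4.49, sketch): the two sensor
    innovations stacked with a single copy of the model fusion component,
    since that component appears in both filters' innovations apart from
    a sign."""
    z = np.asarray(z)
    return np.concatenate([
        z - h1(x1_pred),          # sensor innovation of filter 1
        z - h2(x2_pred),          # sensor innovation of filter 2
        g1(x1_pred) - g2(x2_pred) # model fusion component, included once
    ])
```

Keeping only one copy of the fusion component avoids double-counting the same information in the combined update.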
References

Bar-Shalom, Y. and T. E. Fortmann (1988). Tracking and Data Association. Academic Press.
Bierman, G. J. (1977). Factorization Methods for Discrete Sequential Estimation, Volume 128 of Mathematics in Science and Engineering. Academic Press.
Ewell, J. J. (1988). Space shuttle orbiter entry through land navigation. In IEEE International Conference on Intelligent Robots and Systems, pp. 627–632.
Ljung, L. (1987). System Identification: Theory for the User. Prentice-Hall Information and System Sciences Series. Prentice-Hall.
Magill, D. D. (1965, October). Optimal adaptive estimation of sampled stochastic processes. IEEE Transactions on Automatic Control 10(4), 434–439.
Maybeck, P. S. (1979). Stochastic Models, Estimation, and Control, Volume 1. Academic Press.
Silvermann, L., S. Shookoohi, and P. V. Dooren (1983, August). Linear time-variable systems: Balancing and model reduction. IEEE Transactions on Automatic Control 28(8), 810–822.