Wireless Indoor Localization with Dempster-Shafer Simple Support Functions∗

Vladimir Kulyukin    Amit Banavalikar    John Nicholson
Computer Science Assistive Technology Laboratory
Department of Computer Science
Utah State University
Logan, Utah, U.S.A.
{vladimir.kulyukin}@usu.edu


Abstract— A mobile robot is localized in an indoor environment using IEEE 802.11b wireless signals. Simple support functions of the Dempster-Shafer theory are used to combine evidence from multiple localization algorithms. Empirical results are presented and discussed. Conclusions are drawn regarding when the proposed sensor fusion methods may improve performance and when they may not.

  Index Terms— localization, sensor fusion, Dempster-Shafer
theory

I. INTRODUCTION

In May 2003, the Assistive Technology Laboratory of the Department of Computer Science (CS) of Utah State University (USU) and the USU Center for Persons with Disabilities (CPD) started a collaborative project whose objective is to build an indoor robotic guide for the visually impaired in dynamic and complex indoor environments, such as grocery stores and airports. A proof-of-concept prototype has been deployed in two indoor environments: the USU CS Department and the USU CPD. The guide's name is RG, which stands for "robotic guide."

A. RFID-based localization

RG, shown in Fig. 1, is built on top of the Pioneer 2DX commercial robotic platform from the ActivMedia Corporation. What turns the platform into a robotic guide is a Wayfinding Toolkit (WT) mounted on top of the platform and powered from the on-board batteries. As can be seen in Fig. 1, the WT resides in a polyvinyl chloride (PVC) pipe structure and includes a Dell™ Ultralight X300 laptop connected to the platform's microcontroller, to a laser range finder from SICK, Inc., and to a radio-frequency identification (RFID) reader. The TI Series 2000 RFID reader is connected to a square 200mm × 200mm antenna. The upper left part of Fig. 1 depicts a TI RFID Slim Disk tag attached to a wall. These tags can be attached to any objects in the environment or worn on clothing. They do not require any external power source or direct line of sight to be detected by the RFID reader. The tags are activated by the spherical electromagnetic field generated by the RFID antenna with a radius of approximately 1.5 meters. Each tag is programmatically assigned a unique ID.

RFID tags are viewed as stimuli that trigger or disable specific behaviors, e.g., follow-wall, turn-left, turn-right, avoid-obstacle, make-u-turn, etc. The robot's knowledge base consists of a connectivity graph of the environment, tag-to-destination mappings, and behavior trigger/disable scripts associated with specific tags. Each node of the graph represents a location marked with a tag. The robot's location with respect to the graph is updated as soon as RG detects a tag.

Fig. 1. RG: A robotic guide for the visually impaired.

∗This work is supported, in part, by NSF Grant IIS-0346880 and, in part, by two Community University Research Initiative (CURI) grants (CURI 2003 and CURI 2004) from the State of Utah. Copyright © 2005 USU Computer Science Assistive Technology Laboratory (CSATL).
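The tag-triggered control scheme described above lends itself to a simple data representation. The following is a minimal Python sketch of such a knowledge base: a connectivity graph of tagged locations, tag-to-destination mappings, and per-tag behavior trigger/disable scripts. All tag IDs, location names, and behavior names are hypothetical illustrations, not values from the deployed system.

```python
# Connectivity graph: each node is a location marked with an RFID tag.
graph = {
    "hall_corner_1": ["hall_corner_2", "office_401"],
    "hall_corner_2": ["hall_corner_1", "elevator"],
    "office_401": ["hall_corner_1"],
    "elevator": ["hall_corner_2"],
}

# Tag-to-destination mappings: detected tag ID -> location node.
tag_to_location = {101: "hall_corner_1", 102: "hall_corner_2",
                   103: "office_401", 104: "elevator"}

# Behavior trigger/disable scripts associated with specific tags.
tag_scripts = {
    101: {"trigger": ["follow-wall"], "disable": []},
    102: {"trigger": ["turn-left"], "disable": ["follow-wall"]},
    104: {"trigger": ["make-u-turn"], "disable": ["turn-left"]},
}

def on_tag_detected(tag_id, active_behaviors):
    """Update the robot's location and active behavior set for a tag read."""
    location = tag_to_location[tag_id]
    script = tag_scripts.get(tag_id, {"trigger": [], "disable": []})
    active = (set(active_behaviors) - set(script["disable"])) \
        | set(script["trigger"])
    return location, active

# Example: reading tag 102 while wall-following.
loc, behaviors = on_tag_detected(102, {"follow-wall"})
```

Because the location update is driven purely by tag detections, a missed tag leaves the graph position stale, which is exactly the failure mode that motivates the wireless supplement below.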
During experimental runs described elsewhere [6], the RFID tags were successfully detected with the exception of three runs in crowded environments. During these runs, the robot missed a total of five RFID tags, because it had to navigate around groups of people standing near the tags. The detection failures happened when the tags were outside of the effective range of the robot's RFID antenna. The robot successfully navigated around each group of people using its obstacle avoidance routines. However, the obstacle avoidance maneuver would put a blocked tag outside of the RFID antenna's electromagnetic sphere, which caused the robot to miss an important maneuver, e.g., a turn in the right direction or a u-turn [8]. Consequently, the robot would become lost and would have to stop and re-plan its path after detecting that it had become lost.

B. Wireless localization

To overcome RFID detection failures in crowded environments, it was decided to supplement RFID-based localization with wireless localization. The working hypothesis was that indoor localization can be done by using wireless signals already available in many indoor environments due to the ubiquitous use of wireless Wi-Fi (IEEE 802.11b) Ethernet networks. One advantage of this approach is that it does not require any modification of the environment, e.g., deployment of extra sensors or chips, which may disrupt routine activities of organizations and expose the robot to potential vandalism.

It should be noted that wireless localization is similar to RFID-based localization in that it localizes the robot to a location. No attempt is made to determine the robot's pose (x, y, θ). In keeping with the principles of the Spatial Semantic Hierarchy [9] on which RG's knowledge representation is based, once the robot is localized to a location, the location-specific behavior scripts are triggered to achieve a global navigation objective [7].

Kismet, an open source wireless network analyzer, was used to detect and digitize wireless signal strengths. The software runs on the robot's Dell™ Ultralight X300 laptop equipped with the Orinoco™ Classic Gold PC 802.11b card. D-Link™ 802.11b/2.4GHz wireless access routers were used as access points, i.e., signal sources. A set of locations is selected in a target environment. The wireless signature of each location consists of a vector of signal strengths from each access point detected at that landmark. At run time, signal strengths are classified to a location.

While much effort has been put into modelling wireless radio signals, no single consistent model exists that can reliably describe the behavior of wireless signals indoors [12]. Consequently, it was decided to use sensor fusion to localize the robot. Sensor fusion is a post-processing technique that combines and refines initial sensor readings. The Dempster-Shafer theory (DST) of evidence [20] was chosen as a theoretical framework for sensor fusion. The relative advantages and disadvantages of DST and Bayesian theory have been much debated in the literature [17], [21], [5]. Attempts were made to reduce DST to the fundamental axioms of classical probability theory [10]. However, belief functions, a fundamental concept underlying DST, were shown not to be probability distributions over sample spaces [13].

DST was chosen for three reasons. First, in DST, it is unnecessary to have precise a priori probabilities. This was considered an advantage, because the propagation of wireless signals indoors is affected by dead spots, noise, and interference. Second, Laplace's Principle of Insufficient Reason, i.e., a uniform distribution of equal probability to all points in the unknown sample space, is not imposed and, as a consequence, there is no axiom of additivity. Third, DST evidence combination rules have terms indicating when multiple observations disagree.

C. Related work

The research presented in this paper contributes to the body of work on indoor localization done by assistive technology and robotics researchers. Ladd et al. [12] used Bayesian reasoning combined with Hidden Markov Models (HMMs) to determine the orientation and position of a person using wireless 802.11b signals. The person wore a laptop with a wireless card and was tracked in an indoor environment. The assumption was made that people were minimally present in the environment.

Serrano [19] uses IEEE 802.11b wireless network signals to determine the position of a robot inside a building. The conducted experiments show that wireless indoor localization may not be possible without a preconstructed sensor signal map. However, if a motion model is available, Markov localization techniques can be used to localize the robot accurately. Howard et al. [4] also investigated the use of Markov localization techniques in wireless robot localization.

Talking Signs™ is an infrared localization technology developed at the Smith-Kettlewell Eye Research Institute in San Francisco [2]. The system is based on infrared sensors and operates like the infrared remote control device for television channel selection. Infrared beams carry speech signals embedded in various signs to hand-held receivers that speak those signals to users. Marston and Golledge [14] used Talking Signs™ in their Remote Infrared Audible Signage (RIAS) system. RIAS was installed at the San Francisco CalTrain station to conduct several field tests with legally blind individuals.

The BAT system is an indoor localization system developed at the AT&T Cambridge Research Laboratory [1]. The system uses ultrasonic sensors that are placed on the ceiling to increase coverage and obtain sufficient accuracy. The receiver detects ultrasonic signals and uses triangulation to position itself. The Atlanta Veterans Administration (VA) R&D Center proposed the concept of Talking Braille infrastructure [18]. Talking Braille is a method for providing access to Braille/Raised Letter (BRL) signage at a distance.
Talking Braille is an adaptation of electronic infrared badge technology developed by Charmed Technologies, Inc. The infrastructure consists of small digital circuits embedded in standard BRL signs. Small badges worn by users remotely trigger signs in the user's vicinity. Using buttons on the badge, the user requests that signs either voice their message or transmit their message to the user's device over an infrared beam.

As regards sensor fusion, the research presented here contributes to the body of work done by robotics researchers who used DST to fuse information from multiple robotic sensors. In particular, Murphy [16] used DST as a framework for the Sensor Fusion Effects (SFX) architecture. In the SFX, the robot's execution activities used DST beliefs generated from a percept to either proceed with a task, terminate the task, or conduct more sensing. Other robotics researchers also used DST for sensor fusion [3].

The remainder of this paper is organized as follows. First, a brief review of the salient aspects of DST is given. Second, the details of the proposed approach to wireless indoor localization are presented. Third, the results of the experiments are discussed.

II. DEMPSTER-SHAFER THEORY

In DST, knowledge about the world is represented as a set of elements, Θ, called the frame of discernment (FOD). Each element of Θ corresponds to a proposition. For example, Θ = {θ1, θ2} can be a FOD for a coin tossing experiment so that θ1 is heads and θ2 is tails. Each subset of Θ can be assigned a number, called its basic probability number, that describes the amount of belief apportioned to it by a reasoner.

The assignment of basic probability numbers is governed by a basic probability assignment (BPA) m : 2^Θ → [0, 1] such that m(∅) = 0 and Σ_{A⊆Θ} m(A) = 1. Each BPA describes a belief function over Θ. A subset A of Θ is a focal point of a belief function Bel if m(A) > 0. Suppose that m1 and m2 are two BPAs for two belief functions Bel1 and Bel2 over Θ, respectively. Let A1, A2, ..., Ak, k > 0, be the focal points of Bel1 and B1, B2, ..., Bn, n > 0, be the focal points of Bel2. Then Bel1 and Bel2 can be combined through the orthogonal sum Bel1 ⊕ Bel2, whose BPA is defined as follows:

  m(A) = Σ_{Ai ∩ Bj = A} m1(Ai) m2(Bj) / (1 − Σ_{Ai ∩ Bj = ∅} m1(Ai) m2(Bj))    (1)

Once the pairwise rule is defined, one can orthogonally sum several belief functions. A fundamental result of the DST is that the order of the individual pairwise sums has no impact on the overall result [20].

A simple support function S provides evidential support for one specific subset A of Θ. S is said to be focused on A. The function provides no evidential support for any other subset of Θ unless that set is implied by A, i.e., contains A as its subset. Formally, a simple support function S : 2^Θ → [0, 1], A ≠ ∅, A ⊆ Θ, is defined as S(B) = 0, if ¬(A ⊆ B); S(B) = s, 0 ≤ s ≤ 1, if A ⊆ B and B ≠ Θ; S(B) = 1, if B = Θ. If S is focused on A, S's BPA is defined as follows: m(A) = S(A); m(Θ) = 1 − S(A); m(B) = 0 for B ≠ A, B ≠ Θ. A separable support function is the orthogonal sum of two or more simple support functions.

Simple support functions can be homogeneous or heterogeneous. Homogeneous simple support functions focus on the same subset of Θ, whereas heterogeneous simple support functions focus on different subsets of Θ.

Let S1 and S2 be two simple support functions focused on A so that S1(A) = s1 and S2(A) = s2. It can be shown that the BPA m corresponding to S1 ⊕ S2 is defined as follows: m(A) = 1 − (1 − s1)(1 − s2) and m(Θ) = (1 − s1)(1 − s2). If S1 is focused on A and S2 is focused on B ≠ A, then it can be shown that the BPA m corresponding to S1 ⊕ S2 depends on whether A ∩ B = ∅. If A ∩ B ≠ ∅, m(A) = s1(1 − s2); m(A ∩ B) = s1 s2; m(B) = s2(1 − s1); and m(Θ) = (1 − s1)(1 − s2), which gives rise to the following support function:

  S(C) = { 0;  s1 s2;  s1;  s2;  1 − (1 − s1)(1 − s2);  1 }    (2)

The first case arises when ¬(A ∩ B ⊆ C); the second case arises when A ∩ B ⊆ C ∧ ¬(A ⊆ C) ∧ ¬(B ⊆ C); the third case arises when A ⊆ C ∧ ¬(B ⊆ C); the fourth case arises when B ⊆ C ∧ ¬(A ⊆ C); the fifth case arises when A ⊆ C ∧ B ⊆ C ∧ C ≠ Θ; the sixth case arises when C = Θ.

If A ∩ B = ∅, S1 ⊕ S2 has the following BPA: m(A) = s1(1 − s2)/(1 − s1 s2); m(B) = s2(1 − s1)/(1 − s1 s2); m(Θ) = (1 − s1)(1 − s2)/(1 − s1 s2), which corresponds to the following support function:

  S(C) = { 0;  s1(1 − s2)/(1 − s1 s2);  s2(1 − s1)/(1 − s1 s2);  (s1(1 − s2) + s2(1 − s1))/(1 − s1 s2);  1 }    (3)

The first case arises when ¬(A ⊆ C) ∧ ¬(B ⊆ C); the second case arises when A ⊆ C ∧ ¬(B ⊆ C); the third case arises when B ⊆ C ∧ ¬(A ⊆ C); the fourth case arises when A ⊆ C ∧ B ⊆ C ∧ C ≠ Θ; the fifth case arises when C = Θ.
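The combination rules for simple support functions can be checked mechanically. Below is a minimal Python sketch of Dempster's rule (Eq. 1) for BPAs with finitely many focal sets, where a BPA is a dict from focal sets (frozensets over the frame Θ) to basic probability numbers. The frame and the values s1 = 0.6, s2 = 0.5 are illustrative; the homogeneous case reproduces m(A) = 1 − (1 − s1)(1 − s2) from the derivation above.

```python
from itertools import product

THETA = frozenset({"A", "B", "C"})  # illustrative frame of discernment

def simple_support_bpa(focus, s):
    """BPA of a simple support function: m(focus) = s, m(Θ) = 1 - s."""
    return {frozenset(focus): s, THETA: 1.0 - s}

def combine(m1, m2):
    """Orthogonal sum m1 ⊕ m2 per Dempster's rule.

    Mass committed to empty intersections is the conflict; the remaining
    masses are renormalized by 1 - conflict (assumed < 1 here)."""
    raw, conflict = {}, 0.0
    for (a, v1), (b, v2) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            raw[inter] = raw.get(inter, 0.0) + v1 * v2
        else:
            conflict += v1 * v2
    norm = 1.0 - conflict
    return {focal: v / norm for focal, v in raw.items()}

# Homogeneous case: both functions focused on {A} with s1=0.6, s2=0.5,
# so m({A}) = 1 - (1-0.6)(1-0.5) = 0.8 and m(Θ) = 0.2.
m = combine(simple_support_bpa({"A"}, 0.6), simple_support_bpa({"A"}, 0.5))
```

The heterogeneous, disjoint-foci case of Eq. (3) falls out of the same routine, since the normalization term 1 − s1 s2 is exactly 1 minus the conflict mass.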
III. WIRELESS LOCALIZATION

The target environment for localization experiments was the USU CS Department. The department occupies an indoor area of approximately 6,590 square meters.
The floor contains 23 offices, 7 laboratories, a conference room, a student lounge, a tutor room, two elevators, several bathrooms, and two staircases.

Five wireless access points were deployed at various offices in the USU CS Department. The offices are shown in Fig. 2 with black circles. The offices were selected on the basis of their availability. No other strategy was used for choosing the offices. Five locations were then selected. Each location was at a corner. Corners were selected because in indoor environments they are very useful decision points. In Fig. 2, the locations are shown as circles with crosses. Each location had several (two or more) collection positions marked. A collection position was the actual place where wireless signal strengths were collected. Each collection position was located 1.5 meters away from a corner. Fig. 3 shows how wireless signal strength data were collected at a hall corner. The bullets represent three collection positions. The width of the hall determined how many collection positions were needed. If the hall was narrow (width < 2 meters), only one collection position was chosen in the middle of the hall. If the hall was wider than 2 meters, then there were two collection positions, which were positioned to divide the hall width into thirds. A total of 13 collection positions was chosen for the five selected locations. Thus, each location corresponded to at least two collection positions.

Fig. 2. Wi-Fi access points at the USU CS Department.

Fig. 3. Data collection at a location.

Two sets of samples were taken at each collection position, one for each direction of the hall's orientation. So, for example, if a hall's orientation was from north to south, two sets of samples were collected: one facing north, the other facing south. A set of samples consisted of two minutes worth of data. An individual sample was a set of five wireless signal strengths, one from each wireless access point in the department. Samples were collected at a rate of approximately one sample every ten milliseconds. Different sets of data for a single collection position were collected on different days in order to see a wider variety of signal strength patterns. Each collection position and direction combination had 10 total sets of data, which amounted to a total of twenty minutes worth of data. Therefore, the total data collection time was 260 minutes, which resulted in a total of 1,553,428 samples. These samples were used for training purposes.

To obtain the validation data, RG was made to navigate the route that contained all the selected locations 5 times in each direction. Four pieces of masking tape were placed at each collection position: two at 0.5 meter from the collection position and two at 1 meter from the collection position. The pieces of tape marked the proximity to the collection position, i.e., the robot is within 0.5 meter of the collection position and the robot is within 1 meter of the collection position. As the robot crossed a tape, a human operator following the robot would press a key on a wearable keypad to mark this event electronically. Thus, in the validation file, the readings at each position were marked with the proximity to that position. Unlike in the wireless localization experiments conducted by Ladd et al. [12], people were present in the environment during the robot runs.

A. Localization algorithms

The following algorithms were used for localization: Bayesian, C4.5, and an artificial neural network (ANN) [15]. The Bayesian algorithm considered the access points to be independent of each other. At each location, the priors were acquired for the probabilities of specific signal strengths from each sensor at that location, i.e., P(si | L), where si is the signal strength from the i-th sensor at location L. At run time, the standard Bayes rule was used to classify received signal strengths with respect to a specific location. The C4.5 algorithm inductively constructed a decision tree for classifying the signal strengths into five locations. One backpropagation ANN was trained for each location. Each ANN had 5 input nodes, i.e., 1 node for each access point, 2 hidden layers of 10 nodes each, and 1 output node. At run time, the outputs from each ANN were taken and the final classification was decided by the activation levels of the output nodes of the individual ANNs. The winner ANN determined the result location.
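The Bayesian localizer just described can be sketched as a naive-Bayes classifier that estimates P(si | L) from training counts and scores locations in log space. The class name, the smoothing constant, and the sample values below are assumptions for illustration, not details of the actual implementation.

```python
import math
from collections import defaultdict

class NaiveBayesLocalizer:
    """Treats the access points as independent and classifies a reading
    vector X = [s1, ..., s5] to the location maximizing P(X | L)."""

    def __init__(self, num_sensors=5, smoothing=1e-6):
        self.num_sensors = num_sensors
        self.smoothing = smoothing  # avoids log(0) for unseen strengths
        # counts[L][i][s] = times strength s was seen from sensor i at L
        self.counts = defaultdict(
            lambda: [defaultdict(int) for _ in range(num_sensors)])
        self.totals = defaultdict(int)

    def train(self, location, sample):
        for i, s in enumerate(sample):
            self.counts[location][i][s] += 1
        self.totals[location] += 1

    def classify(self, sample):
        best, best_score = None, float("-inf")
        for loc in self.counts:
            # log P(X | L) = sum_i log P(s_i | L), by independence
            score = 0.0
            for i, s in enumerate(sample):
                p = self.counts[loc][i][s] / self.totals[loc]
                score += math.log(p + self.smoothing)
            if score > best_score:
                best, best_score = loc, score
        return best
```

The decision-tree (C4.5) and per-location ANN classifiers play the same role at run time: each maps a strength vector to a location, which is what the fusion step below consumes.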
Algorithm                  Position
Li . Let X be a vector of wireless signal strength readings                                   1       2        3        4       5
such that X = [s1 , s2 , s3 , s4 , s5 ], where 0 ≤ si ≤ 130.                     BAY         0.98    0.95    0.79      0.65   0.91
Let A be a localization algorithm such that X is its input                       C45         0.94    0.95    0.77      0.67   0.95
                                                                                 ANN         0.98    0.94    0.81      0.72   0.88
so that A(X) ∈ Θ, i.e., the output of A is a possibly
                                                                                 DST1        1.00    0.97    0.84      0.84   0.99
empty set of locations. Let T be the target location, i.e., the                  DST2        1.00    0.98    0.79      0.67   0.99
current location of the robot. Let all available algorithms be
enumerated as A1 , ..., An , n > 0.                                                                 TABLE I
   The performance of each localization algorithm at L i can                             TABLE I: PPV AT 0.5 METER .
                                                  Ai
be represented as a simple support function S B={Li } , where
B = {Li } is the focus of S and A i is a localization algorithm.                 Algorithm                  Position
For example, if there are five locations and three localization                                1       2        3        4       5
algorithms, there are fifteen simple support functions: one                       BAY         0.93    0.91    0.82      0.68   0.91
simple support function for each location and each localiza-                     C45         0.87    0.91    0.78      0.64   0.95
                                                                                 ANN         0.92    0.93    0.82      0.67   0.89
tion algorithm.                                                                  DST1        0.72    0.89    0.82      0.80   0.99
   At run time, given X, A j (X) is computed for each L i                        DST2        0.91    0.90    0.81      0.68   0.97
and for each localization algorithm A j . If Aj (X) is greater
                      Aj                                                                            TABLE II
than the threshold, S {Li } ({Li }) = sij , where sij is the basic
                                   Aj                                                    TABLE II: PPV AT 1.0 METER .
probability number with which    S {Li }   supports its focus. Oth-
          Aj
erwise, S{Li } ({Li }) = 0. The support for L i is computed as
  A1            Aj
S{Li } ⊕...⊕S{Li} . After such orthogonal sums are computed
                                                                         Let T P , T N , F P , and F N be the number of true
for each location, the location whose orthogonal sum gives it
                                                                      positives, true negatives, false positives, and false negatives,
the greatest support is selected. This method of combination
                                                                      respectively. Using T P , T N , F P , and F N , one can define
is called homogeneous insomuch as the orthogonal sums are
                                                                      four evaluation statistics: sensitivity, specificity, positive pre-
computed of simple support functions with the same focus.
                                                                      dictive value (PPV), negative predictive value (NPV) [11].
   There is another possibility of evidence combination. From
                                                                      Sensitivity, T P/(T P + F N ), estimates the probability of A
preliminary tests it is possible to find the best localization
                                                                      saying that the signal receiver is at location L given that
algorithm for each location according to some criterion C.
                                                                      the signal receiver is at location L, i.e., P [A(X) = L|T =
Suppose that A1 , ..., An are the best localization algorithms
                                                                      L]. Specificity, defined as T N/(T N + F P ), estimates the
for each of the n locations. Note that the same algorithm
                                                                      probability of A saying that the signal receiver is not at L
can be best for several locations. Suppose further that
                                                                      given that the signal receiver is not at L, i.e., P [A(X) =
these algorithms are represented as simple support function
                                                                      L|T = L]. PPV, defined as T P/(T P + F P ), estimates the
S{L1 } , ..., S{Ln } . Given X, Ai (X) is computed for each L i ,
                                                                      probability that the receiver is at L given that A says that the
where A_i(X) is the output of the best algorithm for L_i. If A_i(X) is greater than some threshold, S_{Li}({L_i}) = s_i. Once each of the n support degrees is computed, the orthogonal sum S = S_{L1} ⊕ ... ⊕ S_{Ln} is computed. The resulting sum is heterogeneous, because each simple support function has a different focus. The best location is the location with the highest degree of support according to S.

C. Assigning basic probability numbers

   If one is to represent each localization algorithm as a simple support function, the question arises as to how to assign the basic probability numbers with which each simple support function supports the location on which it is focused. One possibility is to compute the basic probability numbers in terms of true and false positives and true and false negatives. A true positive is defined as A(X) = L and T = L, where T is the true location and L is a location output by the algorithm. A true negative is defined as A(X) ≠ L and T ≠ L. A false positive is defined as A(X) = L and T ≠ L. A false negative is defined as A(X) ≠ L and T = L. The PPV, defined as TP/(TP + FP), estimates the probability that the signal receiver is at L given that the algorithm says that the receiver is at L, i.e., P[T = L | A(X) = L]. Finally, the NPV, defined as TN/(TN + FN), estimates the probability that the signal receiver is not at L given that the algorithm says that the receiver is not at L, i.e., P[T ≠ L | A(X) ≠ L].

   The PPV was chosen as the metric for computing basic probability numbers, because it simulates the run-time performance of a localization algorithm. In particular, the PPV estimates the likelihood of the signal receiver being at L when the algorithm states that the receiver is at L.

                                   IV. EXPERIMENTS

   Tables I and II show the PPV numbers computed from the robot's validation runs. Table I shows the PPV numbers for the 0.5 meter proximity, and Table II shows the PPV numbers for the 1 meter proximity. In both tables, DST1 denotes the homogeneous combination of simple support functions, while DST2 denotes the heterogeneous combination. To analyze the results, the performance R of each algorithm was discretized into three intervals: strong (0.90 ≤ R), average (0.80 ≤ R < 0.90), and weak (R < 0.80).
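As an illustration of the PPV-based mass assignment and the orthogonal sum described above, the following sketch combines simple support functions with Dempster's rule. The function names, frame of discernment, and confusion counts are illustrative assumptions, not taken from the paper:

```python
from itertools import product

def ppv(tp, fp):
    """Positive predictive value TP/(TP + FP), used as the support degree s_i."""
    return tp / (tp + fp)

def simple_support(theta, focus, s):
    """Simple support function: mass s on the focused location, 1 - s on the frame."""
    return {frozenset([focus]): s, frozenset(theta): 1.0 - s}

def orthogonal_sum(m1, m2):
    """Dempster's rule: multiply masses of intersecting focal elements, renormalize."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: orthogonal sum undefined")
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Illustrative frame of three locations; support degrees come from made-up counts.
theta = {"L1", "L2", "L3"}
supports = [("L1", ppv(95, 5)), ("L1", ppv(90, 10)), ("L2", ppv(60, 40))]
m = simple_support(theta, *supports[0])
for focus, s in supports[1:]:
    m = orthogonal_sum(m, simple_support(theta, focus, s))

best = max(m, key=m.get)  # the (singleton) location set with the highest support
```

For two simple support functions focused on the same location, the combined support reduces to the familiar 1 - (1 - s1)(1 - s2); for different foci, the conflicting products are discarded and the remaining mass is renormalized.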
   The following observations were made. First, when the performance of all three algorithms is strong, DST1 and DST2 either maintained the same level of performance or slightly improved it. For example, Table I column 1 shows that, at location 1, the three algorithms, i.e., Bayesian, C4.5, and ANN, performed at 0.98, 0.94, and 0.98, respectively. The performance numbers for DST1 and DST2 at the same location and proximity are both 1.0. The same behavior can be observed within 0.5 meter and 1 meter of location 2. As shown in Table I column 2, within 0.5 meter of location 2, the three algorithms performed at 0.95, 0.95, and 0.94. At the same location and proximity, DST1 performed at 0.97 and DST2 at 0.98. As shown in Table II column 2, within 1 meter of location 2, the three algorithms performed at 0.91, 0.91, and 0.93. At the same location and proximity, DST1 performed at 0.89 and DST2 at 0.90. Second, when all three algorithms performed weakly, DST1 significantly improved performance, while DST2 remained at the same weak level. For example, Table I column 4 shows that, within 0.5 meter of location 4, the three algorithms performed at 0.65, 0.67, and 0.72. At the same location and proximity, DST1 achieved 0.84, a significant improvement, while DST2 remained at 0.67. Similarly, as shown in Table II column 4, within 1 meter of location 4, the performance levels of the three algorithms were 0.68, 0.64, and 0.67. At the same location and proximity, DST1 achieved 0.80, a substantial improvement, while DST2 remained at 0.68. Third, when two algorithms performed strongly and one averagely, DST2 improved the overall performance or kept it at the same level, while DST1 behaved inconsistently. For example, as shown in Table II column 1, DST2 remained at the same level, while DST1's performance worsened. However, as shown in Table I column 5 and Table II column 5, both DST1 and DST2 raised the performance level significantly at location 5. Fourth, localization at the proximity of 0.5 meter was overall better than at the proximity of 1 meter, because the location areas were further apart and the wireless signals were not confounded. Fifth, the localization performance dropped at locations 3 and 4, because those locations were only 3 meters apart and cross-misclassification was frequently observed.

                                   V. CONCLUSION

   The following tentative conclusions can be drawn from the above observations. First, when all algorithms whose outputs are fused perform strongly, the addition of sensor fusion is likely to improve the overall performance and move it to 1.0. Second, when all algorithms perform weakly, homogeneous sensor fusion is likely to improve performance significantly. Third, if possible, locations should be selected further apart so as not to confound the wireless signals. It should be noted that these conclusions apply only to wireless localization indoors and are not to be interpreted as general recommendations. The term sensor fusion in the presented conclusions refers only to the sensor fusion methods described in this paper, i.e., to methods in which the fused algorithms are represented as DST simple support functions that are subsequently fused homogeneously or heterogeneously.

                                   REFERENCES

[1] M. Addlesee, R. Curwen, S. Hodges, J. Newman, P. Steggles, and A. Ward, "Implementing a Sentient Computing System," IEEE Computer, pp. 2-8, August 2001.
[2] R.G. Golledge, J.R. Marston, and C.M. Costanzo, "Assistive Devices and Services for the Disabled: Auditory Signage and the Accessible City for Blind and Vision Impaired Travelers," Technical Report UCB-ITS-PWP-98-18, Department of Geography, University of California at Santa Barbara, 1998.
[3] T. Henderson and E. Shilcrat, "Logical Sensor Systems," Journal of Robotic Systems, 2(1):169-193, 1984.
[4] A. Howard, S. Siddiqi, and G.S. Sukhatme, "An Experimental Study of Localization Using Wireless Ethernet," The 4th International Conference on Field and Service Robotics, July 2003, Lake Yamanaka, Japan.
[5] I. Kramosil, Probabilistic Analysis of Belief Functions, Kluwer Academic Publishers: New York, NY, 2001.
[6] V. Kulyukin, C. Gharpure, J. Nicholson, and S. Pavithran, "RFID in Robot-Assisted Indoor Navigation for the Visually Impaired," IEEE/RSJ Intelligent Robots and Systems (IROS 2004) Conference, September-October 2004, Sendai, Japan: Sendai Kyodo Printing Co.
[7] V. Kulyukin, C. Gharpure, N. De Graw, J. Nicholson, and S. Pavithran, "A Robotic Guide for the Visually Impaired in Indoor Environments," Rehabilitation Engineering and Assistive Technology Society of North America (RESNA 2004) Conference, June 2004, Orlando, FL: Avail. on CD-ROM.
[8] V. Kulyukin, C. Gharpure, P. Sute, N. De Graw, J. Nicholson, and S. Pavithran, "A Robotic Wayfinding System for the Visually Impaired," Innovative Applications of Artificial Intelligence (IAAI-04) Conference, July 2004, San Jose, CA: AAAI/MIT Press.
[9] B. Kuipers, "The Spatial Semantic Hierarchy," Artificial Intelligence, 119:191-233, 2000.
[10] H.E. Kyburg, "Bayesian and Non-Bayesian Evidential Updating," Artificial Intelligence, 31(3):271-293, 1987.
[11] http://www.medicine.uiowa.edu/Path_Handbook/, Laboratory Services Handbook, Department of Pathology, The University of Iowa.
[12] A.M. Ladd, K. Bekris, A. Rudys, G. Marceau, L. Kavraki, and D. Wallach, "Robotics-Based Location Sensing Using Wireless Ethernet," Eighth Annual International Conference on Mobile Computing and Networking (MobiCom), September 2002, Atlanta, GA: ACM.
[13] J.F. Lemmer, "Confidence Factors, Empiricism, and the Dempster-Shafer Theory of Evidence," in Uncertainty in Artificial Intelligence, L.N. Kanal and J.F. Lemmer, Eds. Elsevier Scientific Publishers: Amsterdam, The Netherlands, 1986.
[14] J.R. Marston and R.G. Golledge, "Towards an Accessible City: Removing Functional Barriers for the Blind and Visually Impaired: A Case for Auditory Signs," Technical Report, Department of Geography, University of California at Santa Barbara, 2000.
[15] T. Mitchell, Machine Learning, McGraw-Hill: New York, NY, 1997.
[16] R.R. Murphy, "Dempster-Shafer Theory for Sensor Fusion in Autonomous Mobile Robots," IEEE Transactions on Robotics and Automation, 14(2), April 1998.
[17] J. Pearl, Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference, Morgan Kaufmann: San Mateo, CA, 1988.
[18] D.A. Ross, "Talking Braille: Making Braille Signage Accessible at a Distance," Rehabilitation Engineering and Assistive Technology Society of North America (RESNA 2004) Conference, Orlando, FL, June 2004.
[19] O. Serrano, "Robot Localization Using Wireless Networks," Technical Report, Departamento de Informática, Estadística y Telemática, Universidad Rey Juan Carlos, Móstoles, Spain, 2003.
[20] G. Shafer, A Mathematical Theory of Evidence, Princeton University Press: Princeton, NJ, 1976.
[21] P. Smets, "The Combination of Evidence in the Transferable Belief Model," IEEE Transactions on Pattern Analysis and Machine Intelligence, 12:447-458, 1990.

Más contenido relacionado

La actualidad más candente

Main project (touch screen based robotic vehicle)
Main project (touch screen based robotic vehicle)Main project (touch screen based robotic vehicle)
Main project (touch screen based robotic vehicle)VK RM
 
Zigbee Controlled Multi Functional Surveillance Spy Robot for Military Applic...
Zigbee Controlled Multi Functional Surveillance Spy Robot for Military Applic...Zigbee Controlled Multi Functional Surveillance Spy Robot for Military Applic...
Zigbee Controlled Multi Functional Surveillance Spy Robot for Military Applic...Associate Professor in VSB Coimbatore
 
Mobile control robot
Mobile control robotMobile control robot
Mobile control robotSachin Malik
 
Autonomous military robot with short range radar and guidance system
Autonomous military robot with short range radar and guidance system Autonomous military robot with short range radar and guidance system
Autonomous military robot with short range radar and guidance system SatyamShivansh
 
Presentation IOT Robot
Presentation IOT RobotPresentation IOT Robot
Presentation IOT RobotVatsal N Shah
 
Women Safety Night Patrolling Robot Using IOT
Women Safety Night Patrolling Robot Using IOTWomen Safety Night Patrolling Robot Using IOT
Women Safety Night Patrolling Robot Using IOTDr. Amarjeet Singh
 
Mobile controlled robot using bluetooth module (HC-005)
Mobile controlled robot using bluetooth module (HC-005)Mobile controlled robot using bluetooth module (HC-005)
Mobile controlled robot using bluetooth module (HC-005)Sachin Malik
 
Android Controlled Arduino Spy Robot
Android Controlled Arduino Spy RobotAndroid Controlled Arduino Spy Robot
Android Controlled Arduino Spy RobotMahesh Tibrewal
 
Wireless bomb disposal robot ppt
Wireless bomb disposal robot pptWireless bomb disposal robot ppt
Wireless bomb disposal robot pptAbhishek Gupta
 
WIRELESS ROBOT PPT
WIRELESS ROBOT PPTWIRELESS ROBOT PPT
WIRELESS ROBOT PPTAIRTEL
 
Cell Phone Operated Robot for Search and Research of an Object
Cell Phone Operated Robot for Search and Research of an ObjectCell Phone Operated Robot for Search and Research of an Object
Cell Phone Operated Robot for Search and Research of an ObjectNikita Kaushal
 
WIRELESS ROBOT
WIRELESS ROBOTWIRELESS ROBOT
WIRELESS ROBOTAIRTEL
 
Internet of Things (Iot) Based Robotic Arm
Internet of Things (Iot) Based Robotic ArmInternet of Things (Iot) Based Robotic Arm
Internet of Things (Iot) Based Robotic ArmIRJET Journal
 
Wireless Bomb Disposal Robot
Wireless Bomb Disposal RobotWireless Bomb Disposal Robot
Wireless Bomb Disposal RobotAbhishek Gupta
 
Project seminar for group
Project seminar for groupProject seminar for group
Project seminar for groupuche55nna
 
War Field Spying Robot with Night Vision Wireless Camera
War Field Spying Robot with Night Vision Wireless CameraWar Field Spying Robot with Night Vision Wireless Camera
War Field Spying Robot with Night Vision Wireless CameraEdgefxkits & Solutions
 
IRJET - Wireless Military Defense Robot
IRJET -  	  Wireless Military Defense RobotIRJET -  	  Wireless Military Defense Robot
IRJET - Wireless Military Defense RobotIRJET Journal
 
Hydromodus- An Autonomous Underwater Vehicle
Hydromodus- An Autonomous Underwater VehicleHydromodus- An Autonomous Underwater Vehicle
Hydromodus- An Autonomous Underwater VehicleJordan Read
 

La actualidad más candente (20)

Main project (touch screen based robotic vehicle)
Main project (touch screen based robotic vehicle)Main project (touch screen based robotic vehicle)
Main project (touch screen based robotic vehicle)
 
Zigbee Controlled Multi Functional Surveillance Spy Robot for Military Applic...
Zigbee Controlled Multi Functional Surveillance Spy Robot for Military Applic...Zigbee Controlled Multi Functional Surveillance Spy Robot for Military Applic...
Zigbee Controlled Multi Functional Surveillance Spy Robot for Military Applic...
 
Mobile control robot
Mobile control robotMobile control robot
Mobile control robot
 
Human Detection Robot
Human Detection Robot Human Detection Robot
Human Detection Robot
 
Autonomous military robot with short range radar and guidance system
Autonomous military robot with short range radar and guidance system Autonomous military robot with short range radar and guidance system
Autonomous military robot with short range radar and guidance system
 
Presentation IOT Robot
Presentation IOT RobotPresentation IOT Robot
Presentation IOT Robot
 
Women Safety Night Patrolling Robot Using IOT
Women Safety Night Patrolling Robot Using IOTWomen Safety Night Patrolling Robot Using IOT
Women Safety Night Patrolling Robot Using IOT
 
Mobile controlled robot using bluetooth module (HC-005)
Mobile controlled robot using bluetooth module (HC-005)Mobile controlled robot using bluetooth module (HC-005)
Mobile controlled robot using bluetooth module (HC-005)
 
Android Controlled Arduino Spy Robot
Android Controlled Arduino Spy RobotAndroid Controlled Arduino Spy Robot
Android Controlled Arduino Spy Robot
 
Wireless bomb disposal robot ppt
Wireless bomb disposal robot pptWireless bomb disposal robot ppt
Wireless bomb disposal robot ppt
 
WIRELESS ROBOT PPT
WIRELESS ROBOT PPTWIRELESS ROBOT PPT
WIRELESS ROBOT PPT
 
Cell Phone Operated Robot for Search and Research of an Object
Cell Phone Operated Robot for Search and Research of an ObjectCell Phone Operated Robot for Search and Research of an Object
Cell Phone Operated Robot for Search and Research of an Object
 
WIRELESS ROBOT
WIRELESS ROBOTWIRELESS ROBOT
WIRELESS ROBOT
 
Internet of Things (Iot) Based Robotic Arm
Internet of Things (Iot) Based Robotic ArmInternet of Things (Iot) Based Robotic Arm
Internet of Things (Iot) Based Robotic Arm
 
Wireless Bomb Disposal Robot
Wireless Bomb Disposal RobotWireless Bomb Disposal Robot
Wireless Bomb Disposal Robot
 
Project seminar for group
Project seminar for groupProject seminar for group
Project seminar for group
 
War Field Spying Robot with Night Vision Wireless Camera
War Field Spying Robot with Night Vision Wireless CameraWar Field Spying Robot with Night Vision Wireless Camera
War Field Spying Robot with Night Vision Wireless Camera
 
IRJET - Wireless Military Defense Robot
IRJET -  	  Wireless Military Defense RobotIRJET -  	  Wireless Military Defense Robot
IRJET - Wireless Military Defense Robot
 
Major
MajorMajor
Major
 
Hydromodus- An Autonomous Underwater Vehicle
Hydromodus- An Autonomous Underwater VehicleHydromodus- An Autonomous Underwater Vehicle
Hydromodus- An Autonomous Underwater Vehicle
 

Destacado

Cyberknife medanta
Cyberknife medantaCyberknife medanta
Cyberknife medantaSlidevikram
 
The Challenges of Robotic Design
The Challenges of Robotic DesignThe Challenges of Robotic Design
The Challenges of Robotic DesignDesign World
 
Disruptive Technologies Android - Dongle, Smart Glasses, Sensor Fusion, IOT, ...
Disruptive Technologies Android - Dongle, Smart Glasses, Sensor Fusion, IOT, ...Disruptive Technologies Android - Dongle, Smart Glasses, Sensor Fusion, IOT, ...
Disruptive Technologies Android - Dongle, Smart Glasses, Sensor Fusion, IOT, ...Dan Romescu
 
Usages robots beam lyon1 decembre2015
Usages robots beam lyon1 decembre2015Usages robots beam lyon1 decembre2015
Usages robots beam lyon1 decembre2015Christophe Batier
 
Open Source Event Processing for Sensor Fusion Applications
Open Source Event Processing for Sensor Fusion ApplicationsOpen Source Event Processing for Sensor Fusion Applications
Open Source Event Processing for Sensor Fusion Applicationsguestc4ce526
 
Fusion, Acquisition - Optimisez la migration et la continuité des outils col...
 Fusion, Acquisition - Optimisez la migration et la continuité des outils col... Fusion, Acquisition - Optimisez la migration et la continuité des outils col...
Fusion, Acquisition - Optimisez la migration et la continuité des outils col...Microsoft Technet France
 
Major trends in the digital world : “Fusion” “Share” and “Data"
Major trends in the digital world :  “Fusion” “Share” and “Data"Major trends in the digital world :  “Fusion” “Share” and “Data"
Major trends in the digital world : “Fusion” “Share” and “Data"拓弥 宮田
 
淋巴水腫之物理治療 楊靜蘭
淋巴水腫之物理治療 楊靜蘭淋巴水腫之物理治療 楊靜蘭
淋巴水腫之物理治療 楊靜蘭Kit Leong
 
Lymphoscintigraphy As an Imaging Modality in Lymphatic System
Lymphoscintigraphy As an Imaging Modality in Lymphatic SystemLymphoscintigraphy As an Imaging Modality in Lymphatic System
Lymphoscintigraphy As an Imaging Modality in Lymphatic SystemApollo Hospitals
 
Determining a vascular cause for leg pain and referrals
Determining a vascular cause for leg pain and referralsDetermining a vascular cause for leg pain and referrals
Determining a vascular cause for leg pain and referralsSpecialistVeinHealth
 
Measuring for Lower Extremity Compression Garments
Measuring for Lower Extremity Compression GarmentsMeasuring for Lower Extremity Compression Garments
Measuring for Lower Extremity Compression GarmentsOSUCCC - James
 
Deep Vein Thrombosis
Deep Vein ThrombosisDeep Vein Thrombosis
Deep Vein Thrombosisdbridley
 

Destacado (20)

Cyberknife medanta
Cyberknife medantaCyberknife medanta
Cyberknife medanta
 
The Challenges of Robotic Design
The Challenges of Robotic DesignThe Challenges of Robotic Design
The Challenges of Robotic Design
 
Big Eye At Nits
Big Eye At NitsBig Eye At Nits
Big Eye At Nits
 
Disruptive Technologies Android - Dongle, Smart Glasses, Sensor Fusion, IOT, ...
Disruptive Technologies Android - Dongle, Smart Glasses, Sensor Fusion, IOT, ...Disruptive Technologies Android - Dongle, Smart Glasses, Sensor Fusion, IOT, ...
Disruptive Technologies Android - Dongle, Smart Glasses, Sensor Fusion, IOT, ...
 
Usages robots beam lyon1 decembre2015
Usages robots beam lyon1 decembre2015Usages robots beam lyon1 decembre2015
Usages robots beam lyon1 decembre2015
 
Open Source Event Processing for Sensor Fusion Applications
Open Source Event Processing for Sensor Fusion ApplicationsOpen Source Event Processing for Sensor Fusion Applications
Open Source Event Processing for Sensor Fusion Applications
 
Fusion, Acquisition - Optimisez la migration et la continuité des outils col...
 Fusion, Acquisition - Optimisez la migration et la continuité des outils col... Fusion, Acquisition - Optimisez la migration et la continuité des outils col...
Fusion, Acquisition - Optimisez la migration et la continuité des outils col...
 
Major trends in the digital world : “Fusion” “Share” and “Data"
Major trends in the digital world :  “Fusion” “Share” and “Data"Major trends in the digital world :  “Fusion” “Share” and “Data"
Major trends in the digital world : “Fusion” “Share” and “Data"
 
Robotics
RoboticsRobotics
Robotics
 
Robotics project ppt
Robotics project pptRobotics project ppt
Robotics project ppt
 
淋巴水腫之物理治療 楊靜蘭
淋巴水腫之物理治療 楊靜蘭淋巴水腫之物理治療 楊靜蘭
淋巴水腫之物理治療 楊靜蘭
 
Lymphoscintigraphy As an Imaging Modality in Lymphatic System
Lymphoscintigraphy As an Imaging Modality in Lymphatic SystemLymphoscintigraphy As an Imaging Modality in Lymphatic System
Lymphoscintigraphy As an Imaging Modality in Lymphatic System
 
A3 - Symptom Management
A3 - Symptom ManagementA3 - Symptom Management
A3 - Symptom Management
 
Determining a vascular cause for leg pain and referrals
Determining a vascular cause for leg pain and referralsDetermining a vascular cause for leg pain and referrals
Determining a vascular cause for leg pain and referrals
 
Dvt
DvtDvt
Dvt
 
Measuring for Lower Extremity Compression Garments
Measuring for Lower Extremity Compression GarmentsMeasuring for Lower Extremity Compression Garments
Measuring for Lower Extremity Compression Garments
 
Vascular disorders
Vascular disordersVascular disorders
Vascular disorders
 
Leg Ulcer
Leg UlcerLeg Ulcer
Leg Ulcer
 
DVT
DVTDVT
DVT
 
Deep Vein Thrombosis
Deep Vein ThrombosisDeep Vein Thrombosis
Deep Vein Thrombosis
 

Similar a Wireless Indoor Localization with Dempster-Shafer Simple Support Functions

Surface-Embedded Passive RF Exteroception: Kepler, Greed, and Buffon’s Needle
Surface-Embedded Passive RF Exteroception: Kepler, Greed, and Buffon’s NeedleSurface-Embedded Passive RF Exteroception: Kepler, Greed, and Buffon’s Needle
Surface-Embedded Passive RF Exteroception: Kepler, Greed, and Buffon’s NeedleVladimir Kulyukin
 
DESIGN AND VLSIIMPLEMENTATION OF ANTICOLLISION ENABLED ROBOT PROCESSOR USING ...
DESIGN AND VLSIIMPLEMENTATION OF ANTICOLLISION ENABLED ROBOT PROCESSOR USING ...DESIGN AND VLSIIMPLEMENTATION OF ANTICOLLISION ENABLED ROBOT PROCESSOR USING ...
DESIGN AND VLSIIMPLEMENTATION OF ANTICOLLISION ENABLED ROBOT PROCESSOR USING ...VLSICS Design
 
Location estimation in zig bee network based on fingerprinting
Location estimation in zig bee network based on fingerprintingLocation estimation in zig bee network based on fingerprinting
Location estimation in zig bee network based on fingerprintingHanumesh Palla
 
356 358,tesma411,ijeast
356 358,tesma411,ijeast356 358,tesma411,ijeast
356 358,tesma411,ijeastaissmsblogs
 
A genetic based indoor positioning algorithm using Wi-Fi received signal stre...
A genetic based indoor positioning algorithm using Wi-Fi received signal stre...A genetic based indoor positioning algorithm using Wi-Fi received signal stre...
A genetic based indoor positioning algorithm using Wi-Fi received signal stre...IAESIJAI
 
Enhancing indoor localization using IoT techniques
Enhancing indoor localization using IoT techniquesEnhancing indoor localization using IoT techniques
Enhancing indoor localization using IoT techniquesMohamed Nabil, MSc.
 
The Locator Framework for Detecting Movement Indoors
The Locator Framework for Detecting Movement IndoorsThe Locator Framework for Detecting Movement Indoors
The Locator Framework for Detecting Movement IndoorsTELKOMNIKA JOURNAL
 
A Novel Three-Dimensional Adaptive Localization (T-Dial) Algorithm for Wirele...
A Novel Three-Dimensional Adaptive Localization (T-Dial) Algorithm for Wirele...A Novel Three-Dimensional Adaptive Localization (T-Dial) Algorithm for Wirele...
A Novel Three-Dimensional Adaptive Localization (T-Dial) Algorithm for Wirele...iosrjce
 
Indoor tracking with bluetooth low energy devices using k nearest neighbour a...
Indoor tracking with bluetooth low energy devices using k nearest neighbour a...Indoor tracking with bluetooth low energy devices using k nearest neighbour a...
Indoor tracking with bluetooth low energy devices using k nearest neighbour a...Conference Papers
 
PARTICLE FILTER APPROACH TO UTILIZATION OF WIRELESS SIGNAL STRENGTH FOR MOBIL...
PARTICLE FILTER APPROACH TO UTILIZATION OF WIRELESS SIGNAL STRENGTH FOR MOBIL...PARTICLE FILTER APPROACH TO UTILIZATION OF WIRELESS SIGNAL STRENGTH FOR MOBIL...
PARTICLE FILTER APPROACH TO UTILIZATION OF WIRELESS SIGNAL STRENGTH FOR MOBIL...ijwmn
 
PARTICLE FILTER APPROACH TO UTILIZATION OF WIRELESS SIGNAL STRENGTH FOR MOBIL...
PARTICLE FILTER APPROACH TO UTILIZATION OF WIRELESS SIGNAL STRENGTH FOR MOBIL...PARTICLE FILTER APPROACH TO UTILIZATION OF WIRELESS SIGNAL STRENGTH FOR MOBIL...
PARTICLE FILTER APPROACH TO UTILIZATION OF WIRELESS SIGNAL STRENGTH FOR MOBIL...ijwmn
 
IRJET- Personal Assistant for Visually Impaired People in Malls
IRJET-  	  Personal Assistant for Visually Impaired People in MallsIRJET-  	  Personal Assistant for Visually Impaired People in Malls
IRJET- Personal Assistant for Visually Impaired People in MallsIRJET Journal
 
Ijsartv6 i336124
Ijsartv6 i336124Ijsartv6 i336124
Ijsartv6 i336124aissmsblogs
 
International Journal of Computational Engineering Research(IJCER)
International Journal of Computational Engineering Research(IJCER)International Journal of Computational Engineering Research(IJCER)
International Journal of Computational Engineering Research(IJCER)ijceronline
 
Passive Radio Frequency Exteroception in Robot-Assisted Shopping for the Blind
Passive Radio Frequency Exteroception in Robot-Assisted Shopping for the BlindPassive Radio Frequency Exteroception in Robot-Assisted Shopping for the Blind
Passive Radio Frequency Exteroception in Robot-Assisted Shopping for the BlindVladimir Kulyukin
 
Development of real-time indoor human tracking system using LoRa technology
Development of real-time indoor human tracking system using LoRa technology Development of real-time indoor human tracking system using LoRa technology
Development of real-time indoor human tracking system using LoRa technology IJECEIAES
 
IRJET- Survey Paper on Human Following Robot
IRJET- Survey Paper on Human Following RobotIRJET- Survey Paper on Human Following Robot
IRJET- Survey Paper on Human Following RobotIRJET Journal
 

Similar a Wireless Indoor Localization with Dempster-Shafer Simple Support Functions (20)

Rfid based localization
Rfid based localizationRfid based localization
Rfid based localization
 
Surface-Embedded Passive RF Exteroception: Kepler, Greed, and Buffon’s Needle
Surface-Embedded Passive RF Exteroception: Kepler, Greed, and Buffon’s NeedleSurface-Embedded Passive RF Exteroception: Kepler, Greed, and Buffon’s Needle
Surface-Embedded Passive RF Exteroception: Kepler, Greed, and Buffon’s Needle
 
DESIGN AND VLSIIMPLEMENTATION OF ANTICOLLISION ENABLED ROBOT PROCESSOR USING ...
DESIGN AND VLSIIMPLEMENTATION OF ANTICOLLISION ENABLED ROBOT PROCESSOR USING ...DESIGN AND VLSIIMPLEMENTATION OF ANTICOLLISION ENABLED ROBOT PROCESSOR USING ...
DESIGN AND VLSIIMPLEMENTATION OF ANTICOLLISION ENABLED ROBOT PROCESSOR USING ...
 
Location estimation in zig bee network based on fingerprinting
Location estimation in zig bee network based on fingerprintingLocation estimation in zig bee network based on fingerprinting
Location estimation in zig bee network based on fingerprinting
 
356 358,tesma411,ijeast
356 358,tesma411,ijeast356 358,tesma411,ijeast
356 358,tesma411,ijeast
 
A genetic based indoor positioning algorithm using Wi-Fi received signal stre...
A genetic based indoor positioning algorithm using Wi-Fi received signal stre...A genetic based indoor positioning algorithm using Wi-Fi received signal stre...
A genetic based indoor positioning algorithm using Wi-Fi received signal stre...
 
Enhancing indoor localization using IoT techniques
Enhancing indoor localization using IoT techniquesEnhancing indoor localization using IoT techniques
Enhancing indoor localization using IoT techniques
 
The Locator Framework for Detecting Movement Indoors
The Locator Framework for Detecting Movement IndoorsThe Locator Framework for Detecting Movement Indoors
The Locator Framework for Detecting Movement Indoors
 
J017345864
J017345864J017345864
J017345864
 
A Novel Three-Dimensional Adaptive Localization (T-Dial) Algorithm for Wirele...
A Novel Three-Dimensional Adaptive Localization (T-Dial) Algorithm for Wirele...A Novel Three-Dimensional Adaptive Localization (T-Dial) Algorithm for Wirele...
A Novel Three-Dimensional Adaptive Localization (T-Dial) Algorithm for Wirele...
 
Indoor tracking with bluetooth low energy devices using k nearest neighbour a...
Indoor tracking with bluetooth low energy devices using k nearest neighbour a...Indoor tracking with bluetooth low energy devices using k nearest neighbour a...
Indoor tracking with bluetooth low energy devices using k nearest neighbour a...
 
PARTICLE FILTER APPROACH TO UTILIZATION OF WIRELESS SIGNAL STRENGTH FOR MOBIL...
PARTICLE FILTER APPROACH TO UTILIZATION OF WIRELESS SIGNAL STRENGTH FOR MOBIL...PARTICLE FILTER APPROACH TO UTILIZATION OF WIRELESS SIGNAL STRENGTH FOR MOBIL...
PARTICLE FILTER APPROACH TO UTILIZATION OF WIRELESS SIGNAL STRENGTH FOR MOBIL...
 
PARTICLE FILTER APPROACH TO UTILIZATION OF WIRELESS SIGNAL STRENGTH FOR MOBIL...
PARTICLE FILTER APPROACH TO UTILIZATION OF WIRELESS SIGNAL STRENGTH FOR MOBIL...PARTICLE FILTER APPROACH TO UTILIZATION OF WIRELESS SIGNAL STRENGTH FOR MOBIL...
PARTICLE FILTER APPROACH TO UTILIZATION OF WIRELESS SIGNAL STRENGTH FOR MOBIL...
 
IRJET- Personal Assistant for Visually Impaired People in Malls
IRJET-  	  Personal Assistant for Visually Impaired People in MallsIRJET-  	  Personal Assistant for Visually Impaired People in Malls
IRJET- Personal Assistant for Visually Impaired People in Malls
 
Ijsartv6 i336124
Ijsartv6 i336124Ijsartv6 i336124
Ijsartv6 i336124
 
X25119123
X25119123X25119123
X25119123
 
International Journal of Computational Engineering Research(IJCER)
International Journal of Computational Engineering Research(IJCER)International Journal of Computational Engineering Research(IJCER)
International Journal of Computational Engineering Research(IJCER)
 
Passive Radio Frequency Exteroception in Robot-Assisted Shopping for the Blind
Passive Radio Frequency Exteroception in Robot-Assisted Shopping for the BlindPassive Radio Frequency Exteroception in Robot-Assisted Shopping for the Blind
Passive Radio Frequency Exteroception in Robot-Assisted Shopping for the Blind
 
Development of real-time indoor human tracking system using LoRa technology
Development of real-time indoor human tracking system using LoRa technology Development of real-time indoor human tracking system using LoRa technology
Development of real-time indoor human tracking system using LoRa technology
 
IRJET- Survey Paper on Human Following Robot
IRJET- Survey Paper on Human Following RobotIRJET- Survey Paper on Human Following Robot
IRJET- Survey Paper on Human Following Robot
 

guide is a Wayfinding Toolkit (WT) mounted on top of the platform and powered from the on-board batteries. As can be seen in Fig. 1, the WT resides in a polyvinyl chloride (PVC) pipe structure and includes a Dell™ Ultralight X300 laptop connected to the platform's microcontroller, to a laser range finder from SICK, Inc., and to a radio-frequency identification (RFID) reader. The TI Series 2000 RFID reader is connected to a square 200mm × 200mm antenna. The upper left part of Fig. 1 depicts a TI RFID Slim Disk attached to a wall. These tags can be attached to any objects in the environment or worn on clothing. They do not require any external power source or direct line of sight to be detected by the RFID reader. The tags are activated by the spherical electromagnetic field generated by the RFID antenna with a radius of approximately 1.5 meters. Each tag is programmatically assigned a unique ID.

∗ This work is supported, in part, by NSF Grant IIS-0346880 and, in part, by two Community University Research Initiative (CURI) grants (CURI 2003 and CURI 2004) from the State of Utah. Copyright © 2005 USU Computer Science Assistive Technology Laboratory (CSATL).

RFID tags are viewed as stimuli that trigger or disable specific behaviors, e.g., follow-wall, turn-left, turn-right, avoid-obstacle, make-u-turn, etc. The robot's knowledge base consists of a connectivity graph of the environment, tag-to-destination mappings, and behavior trigger/disable scripts associated with specific tags. Each node of the graph represents a location marked with a tag. The robot's location with respect to the graph is updated as soon as RG detects a tag. During experimental runs described elsewhere [6], the
RFID tags were successfully detected with the exception of three runs in crowded environments. During these runs, the robot missed a total of five RFID tags, because it had to navigate around groups of people standing near the tags. The detection failures happened when the tags were outside of the effective range of the robot's RFID antenna. The robot successfully navigated around each group of people using its obstacle avoidance routines. However, the obstacle avoidance maneuver would put a blocked tag outside of the RFID antenna's electromagnetic sphere, which caused the robot to miss an important maneuver, e.g., turn in the right direction or make a u-turn [8]. Consequently, the robot would become lost and would have to stop and re-plan its path after detecting that it had become lost.

B. Wireless localization

To overcome RFID detection failures in crowded environments, it was decided to supplement RFID-based localization with wireless localization. The working hypothesis was that indoor localization can be done by using wireless signals already available in many indoor environments due to the ubiquitous use of wireless Wi-Fi (IEEE 802.11b) Ethernet networks. One advantage of this approach is that it does not require any modification of the environment, e.g., deployment of extra sensors or chips, which may disrupt routine activities of organizations and expose the robot to potential vandalism.

It should be noted that wireless localization is similar to RFID-based localization in that it localizes the robot to a location. No attempt is made to determine the robot's pose (x, y, θ). In keeping with the principles of the Spatial Semantic Hierarchy [9], on which RG's knowledge representation is based, once the robot is localized to a location, the location-specific behavior scripts are triggered to achieve a global navigation objective [7].

Kismet, an open source wireless network analyzer, was used to detect and digitize wireless signal strengths. The software runs on the robot's Dell™ Ultralight X300 laptop equipped with the Orinoco™ Classic Gold PC 802.11b card. D-Link™ 802.11b/2.4GHz wireless access routers were used as access points, i.e., signal sources. A set of locations is selected in a target environment. The wireless signature of each location consists of a vector of signal strengths from each access point detected at that location. At run time, signal strengths are classified to a location.

While much effort has been put into modelling wireless radio signals, no single consistent model exists that can reliably describe the behavior of wireless signals indoors [12]. Consequently, it was decided to use sensor fusion to localize the robot. Sensor fusion is a post-processing technique that combines and refines initial sensor readings. The Dempster-Shafer theory (DST) of evidence [20] was chosen as a theoretical framework for sensor fusion. The relative advantages and disadvantages of DST and Bayesian theory have been much debated in the literature [17], [21], [5]. Attempts were made to reduce DST to the fundamental axioms of classical probability theory [10]. However, belief functions, a fundamental concept underlying DST, were shown not to be probability distributions over sample spaces [13].

DST was chosen for three reasons. First, in DST, it is unnecessary to have precise a priori probabilities. This was considered an advantage, because the propagation of wireless signals indoors is affected by dead spots, noise, and interference. Second, Laplace's Principle of Insufficient Reason, i.e., a uniform distribution of equal probability to all points in the unknown sample space, is not imposed and, as a consequence, there is no axiom of additivity. Third, DST evidence combination rules have terms indicating when multiple observations disagree.

C. Related work

The research presented in this paper contributes to the body of work on indoor localization done by assistive technology and robotics researchers. Ladd et al. [12] used Bayesian reasoning combined with Hidden Markov Models (HMMs) to determine the orientation and position of a person using wireless 802.11b signals. The person wore a laptop with a wireless card and was tracked in an indoor environment. The assumption was made that people were minimally present in the environment.

Serrano [19] uses IEEE 802.11b wireless network signals to determine the position of a robot inside a building. The conducted experiments show that wireless indoor localization may not be possible without a preconstructed sensor signal map. However, if a motion model is available, Markov localization techniques can be used to localize the robot accurately. Howard et al. [4] also investigated the use of Markov localization techniques in wireless robot localization.

Talking Signs™ is an infrared localization technology developed at the Smith-Kettlewell Eye Research Institute in San Francisco [2]. The system is based on infrared sensors and operates like the infrared remote control device for television channel selection. Infrared beams carry speech signals embedded in various signs to hand-held receivers that speak those signals to users. Marston and Golledge [14] used Talking Signs™ in their Remote Infrared Audible Signage (RIAS) system. RIAS was installed at the San Francisco CalTrain station to conduct several field tests with legally blind individuals.

The BAT system is an indoor localization system developed at the AT&T Cambridge Research Laboratory [1]. The system uses ultrasonic sensors that are placed on the ceiling to increase coverage and obtain sufficient accuracy. The receiver detects ultrasonic signals and uses triangulation to position itself. The Atlanta Veterans Administration (VA) R&D Center proposed the concept of Talking Braille infrastructure [18]. Talking Braille is a method for providing
  • 3. access to Braille/Raised Letter (BRL) signage at a distance. as its subset. Formally, a simple support function S : 2 Θ → Talking Braille is an adaptation of electronic infrared badge [0, 1], A = , A ∈ Θ, is defined as S(B) = 0, if ¬(A ⊆ B); technology developed by Charmed Technologies, Inc. The S(B) = s, 0 ≤ s ≤ 1, if A ⊆ B, and B = Θ; S(B) = 1, infrastructure consists of small digital circuits embedded in if B = Θ. If S is focused on A, S’s BPAs are defined as standard BRL signs. Small badges worn by users remotely follows: m(A) = S(A); m(Θ) = 1 − S(A); m(B) = 0, trigger signs in the user’s vicinity. Using buttons on the B = A and B ∈ Θ. A separable support function is the badge, the user requests that signs either voice their message orthogonal sum of two or more simple support functions. or transmit their message to the user’s device over an infrared Simple support functions can be homogeneous or hetero- beam. geneous. Homogeneous simple support functions focus on As regards sensor fusion, the research presented here the same subset of Θ, whereas heterogeneous simple support contributes to the body of work done by robotics researchers functions focus on different subsets of Θ. who used DST to fuse information from multiple robotic Let S1 and S2 be two simple support functions focused on sensors. In particular, Murphy [16] used DST as a framework A so that S1 (A) = s1 and S2 (A) = s2 . It can be shown that for the Sensor Fusion Effects (SFX) architecture. In the SFX, the BPA m corresponding to S 1 ⊕ S2 is defined as follows: the robot’s execution activities used DST beliefs generated m(A) = 1 − (1 − s1 )(1 − s2 ) and m(Θ) = (1 − s1 )(1 − s2 ). from a percept to either proceed with a task, terminate the If S1 is focused on A and S 2 is focused on B = A, then task, or conduct more sensing. Other robotics researchers also it can be shown that the BPA m corresponding to S 1 ⊕ S2 used DST for sensor fusion [3]. depends on whether A ∩ B = . 
If A ∩ B = , m(A) = The remainder of this paper is organized as follows. s1 (1 − s2 ); m(A ∩ B) = s1 s2 ; m(B) = s2 (1 − s1 ); and First, a brief review of the salient aspects of DST is given. m(Θ) = (1 − s1 )(1 − s2 ), which gives rise to the following Second, the details of the proposed approach to wireless support function: indoor localization are presented. Third, the results of the ⎧ experiments are discussed. ⎪ 0 ⎪ ⎪ ⎪ s1 s2 ⎪ ⎪ II. D EMPSTER -S HAFER T HEORY ⎨ s1 S(C) = (2) In DST, knowledge about the world is represented as a ⎪ s2 ⎪ ⎪ ⎪ 1 − (1 − s1 )(1 − s2 ) set of elements, Θ, called the frame of discernment (FOD). ⎪ ⎪ ⎩ Each element of Θ corresponds to a proposition. For example, 1 . Θ = {θ1 , θ2 } can be a FOD for a coin tossing experiment The first case arises when ¬(A ∩ B ⊆ C); the second case so that θ1 is heads and θ2 is tails. Each subset of Θ can be arises when A ∩ B ⊆ C ∧ ¬(A ⊆ C) ∧ ¬(B ⊆ C); the assigned a number, called its basic probability number, that third case arises when A ⊆ C ∧ ¬(B ⊆ C); the fourth case describes the amount of belief apportioned to it by a reasoner. arises when B ⊆ C ∧ ¬(A ⊆ C); the fifth case arises when The assignment of basic probability numbers is governed A ⊆ C, B ⊆ C∧ = Θ; the sixth case arises when C = Θ. by a basic probability assignment (BPA) m : 2 Θ → [0, 1] so If A ∩ B = , S1 ⊕ S2 has the following BPA: m(A) = that m( ) = 0 and ΣA⊆Θ m(A) = 1. Each BPA describes a s1 (1 − s2 )/(1 − s1 s2 ); m(B) = s2 (1 − s1 )/(1 − s1 s2 ); belief function over Θ. A subset A of Θ is a focal point of a m(Θ) = (1 − s1 )(1 − s2 )/(1 − s1 s2 ), which corresponds belief function Bel if m(A) > 0. Suppose that m 1 and m2 to the following support function: are two BPAs for two belief functions Bel 1 and Bel2 over Θ, respectively. Let A1 , A2 , ..., Ak , k > 0 be the focal points of ⎧ Bel1 and B1 , B2 , ..., Bn , n > 0 be the focal points of Bel 2 . 
⎪ 0 ⎪ ⎪ ⎪ s1 (1 − s1 )/(1 − s1 s2 ) Then Bel1 and Bel2 can be combined through the orthogonal ⎨ sum Bel1 ⊕ Bel2 whose BPA is defined as follows: S(C) = s2 (1 − s1 )/(1 − s1 s2 ) (3) ⎪ ⎪ (s1 (1 − s2 ) + s2 (1 − s1 ))/(1 − s1 s2 ) ⎪ ⎪ ⎩ ΣAi ∩Bj =A m1 (Ai )m2 (Bj ) 1 m(A) = (1) 1 − ΣAi ∩Bj = m1 (Ai )m2 (Bj ) The first case arises when ¬(A ⊆ C) ∧ ¬(B ⊆ C); the Once the pairwise rule is defined, one can orthogonally second case arises when A ⊆ C ∧ ¬(B ⊆ C); the third case sum several belief functions. A fundamental result of the DST arises when B ⊆ C ∧ ¬(A ⊆ C); the fourth case arises when is that the order of the individual pairwise sums has no impact A ⊆ C ∧ B ⊆ C ∧ C = Θ; the fifth case arises when C = Θ. on the overall result [20]. A simple support function S provides evidential support III. W IRELESS L OCALIZATION for one specific subset A of Θ. S is said to be focused on The target environment for localization experiments was A. The function provides no evidential support for any other the USU CS Department. The department occupies an indoor subset of Θ unless that set is implied by A, i.e., contains A area of approximately 6,590 square meters. The floor contains
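The orthogonal sum of Eq. (1) and the simple support combinations above can be sketched in Python. This is a minimal illustration, not the paper's implementation: the function names are ours, and a BPA is represented here as a dictionary mapping frozenset focal points to basic probability numbers.

```python
from itertools import product

def combine(m1, m2):
    """Orthogonal sum (Eq. 1) of two BPAs given as {frozenset: mass} dicts:
    masses of intersecting focal points are multiplied and accumulated,
    and the result is renormalized by 1 minus the mass of the conflict."""
    raw, conflict = {}, 0.0
    for (a, p), (b, q) in product(m1.items(), m2.items()):
        c = a & b
        if c:                                   # non-empty intersection
            raw[c] = raw.get(c, 0.0) + p * q
        else:                                   # mass committed to conflict
            conflict += p * q
    norm = 1.0 - conflict
    return {focal: mass / norm for focal, mass in raw.items()}

def simple_support(focus, s, frame):
    """BPA of a simple support function focused on `focus` (a proper,
    non-empty subset of `frame`): m(focus) = s, m(frame) = 1 - s."""
    return {frozenset(focus): s, frozenset(frame): 1.0 - s}
```

Combining two simple support functions with the same focus A reproduces m(A) = 1 − (1 − s1)(1 − s2), and combining two with disjoint foci reproduces the BPA behind Eq. (3).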
III. WIRELESS LOCALIZATION

The target environment for the localization experiments was the USU CS Department. The department occupies an indoor area of approximately 6,590 square meters. The floor contains 23 offices, 7 laboratories, a conference room, a student lounge, a tutor room, two elevators, several bathrooms, and two staircases.

Five wireless access points were deployed in various offices in the USU CS Department. The offices are shown in Fig. 2 with black circles. The offices were selected on the basis of their availability; no other strategy was used for choosing them.

Fig. 2. Wi-Fi access points at the USU CS Department.

Five locations were then selected. Each location was at a corner. Corners were selected because in indoor environments they are very useful decision points. In Fig. 2, the locations are shown as circles with crosses. Each location had several (two or more) collection positions marked. A collection position was the actual place where wireless signal strengths were collected. Each collection position was located 1.5 meters away from a corner. Fig. 3 shows how wireless signal strength data were collected at a hall corner; the bullets represent three collection positions. The width of the hall determined how many collection positions were needed. If the hall was narrow (width < 2 meters), only one collection position was chosen in the middle of the hall. If the hall was wider than 2 meters, there were two collection positions, positioned to divide the hall width into thirds. A total of 13 collection positions was chosen for the five selected locations. Thus, each location corresponded to at least two collection positions.

Fig. 3. Data collection at a location.

Two sets of samples were taken at each collection position, one for each direction of the hall's orientation. For example, if a hall's orientation was from north to south, two sets of samples were collected: one facing north, the other facing south. A set of samples consisted of two minutes worth of data. An individual sample was a set of five wireless signal strengths, one from each wireless access point in the department. Samples were collected at a rate of approximately one sample every ten milliseconds. Different sets of data for a single collection position were collected on different days in order to see a wider variety of signal strength patterns. Each collection position and direction combination had 10 total sets of data, which amounted to a total of twenty minutes worth of data. Therefore, the total data collection time was 260 minutes, which resulted in a total of 1,553,428 samples. These samples were used for training purposes.

To obtain the validation data, RG was made to navigate the route that contained all the selected locations 5 times in each direction. Four pieces of masking tape were placed at each collection position: two at 0.5 meter from the collection position and two at 1 meter from it. The pieces of tape marked the proximity to the collection position, i.e., whether the robot was within 0.5 meter or within 1 meter of the collection position. As the robot crossed a tape, a human operator following the robot would press a key on a wearable keypad to mark the event electronically. Thus, in the validation file, the readings at each position were marked with the proximity to that position. Unlike in the wireless localization experiments conducted by Ladd et al. [12], people were present in the environment during the robot runs.

A. Localization algorithms

The following algorithms were used for localization: Bayesian, C4.5, and an artificial neural network (ANN) [15]. The Bayesian algorithm considered the access points to be independent of each other. At each location, priors were acquired for the probabilities of specific signal strengths from each sensor at that location, i.e., P(si | L), where si is the signal strength from the i-th sensor at location L. At run time, the standard Bayes rule was used to classify received signal strengths with respect to a specific location. The C4.5 algorithm inductively constructed a decision tree for classifying the signal strengths into the five locations. One backpropagation ANN was trained for each location. Each ANN had 5 input nodes, i.e., 1 node for each access point, 2 hidden layers of 10 nodes each, and 1 output node. At run time, the outputs of all the ANNs were taken, and the final classification was decided by the activation levels of their output nodes: the winner ANN determined the resulting location.
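The Bayesian localizer can be sketched as follows. The sketch is ours, not the paper's code: the paper states only that per-location likelihoods P(si | L) were acquired from training samples and that the standard Bayes rule was applied at run time, so the class name, the Laplace smoothing, and the frequency-count representation are assumptions made here for illustration.

```python
import math
from collections import Counter, defaultdict

class NaiveBayesLocalizer:
    """Access points are treated as independent, so
    P(L | X) is proportional to P(L) * product_i P(s_i | L)."""

    def __init__(self, n_aps=5, levels=131, smoothing=1.0):
        self.n_aps = n_aps                 # five access points in the department
        self.levels = levels               # signal strengths range over 0..130
        self.smoothing = smoothing         # Laplace smoothing (our assumption)
        self.counts = defaultdict(lambda: [Counter() for _ in range(n_aps)])
        self.totals = Counter()

    def train(self, location, sample):
        """Count one training sample = [s1, ..., s5] at `location`."""
        self.totals[location] += 1
        for i, s in enumerate(sample):
            self.counts[location][i][s] += 1

    def classify(self, sample):
        """Return the location maximizing the log-posterior."""
        n = sum(self.totals.values())
        best, best_lp = None, -math.inf
        for loc, total in self.totals.items():
            lp = math.log(total / n)                       # prior P(L)
            for i, s in enumerate(sample):
                num = self.counts[loc][i][s] + self.smoothing
                den = total + self.smoothing * self.levels
                lp += math.log(num / den)                  # smoothed P(s_i | L)
            if lp > best_lp:
                best, best_lp = loc, lp
        return best
```

Working in log-space avoids underflow when multiplying five small likelihoods per location.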
B. Two evidence combination algorithms

Evidence from the algorithms was combined as follows. Let Θ = {L1, L2, L3, L4, L5}, where Li, 1 ≤ i ≤ 5, corresponds to the proposition that the robot is at location Li. Let X be a vector of wireless signal strength readings such that X = [s1, s2, s3, s4, s5], where 0 ≤ si ≤ 130. Let A be a localization algorithm that takes X as its input so that A(X) ⊆ Θ, i.e., the output of A is a possibly empty set of locations. Let T be the target location, i.e., the current location of the robot. Let all available algorithms be enumerated as A1, ..., An, n > 0.

The performance of each localization algorithm Ai at Li can be represented as a simple support function S^{Ai}_{Li}, where {Li} is the focus of S and Ai is a localization algorithm. For example, if there are five locations and three localization algorithms, there are fifteen simple support functions: one simple support function for each combination of location and localization algorithm.

At run time, given X, Aj(X) is computed for each Li and for each localization algorithm Aj. If Aj(X) is greater than the threshold, S^{Aj}_{Li}({Li}) = sij, where sij is the basic probability number with which S^{Aj}_{Li} supports its focus. Otherwise, S^{Aj}_{Li}({Li}) = 0. The support for Li is computed as S^{A1}_{Li} ⊕ ... ⊕ S^{An}_{Li}. After such orthogonal sums are computed for each location, the location whose orthogonal sum gives it the greatest support is selected. This method of combination is called homogeneous inasmuch as the orthogonal sums are computed from simple support functions with the same focus.

There is another possibility of evidence combination. From preliminary tests, it is possible to find the best localization algorithm for each location according to some criterion C. Suppose that A1, ..., An are the best localization algorithms for each of the n locations. Note that the same algorithm can be best for several locations. Suppose further that these algorithms are represented as simple support functions S_{L1}, ..., S_{Ln}. Given X, Ai(X) is computed for each Li, where Ai is the best algorithm for Li. If Ai(X) is greater than some threshold, S_{Li}({Li}) = si. Once each of the n support degrees is computed, the orthogonal sum S = S_{L1} ⊕ ... ⊕ S_{Ln} is computed. The resulting sum is heterogeneous, because each simple support function has a different focus. The best location is the location with the highest degree of support according to S.

TABLE I: PPV AT 0.5 METER

                     Position
    Algorithm    1     2     3     4     5
    BAY        0.98  0.95  0.79  0.65  0.91
    C45        0.94  0.95  0.77  0.67  0.95
    ANN        0.98  0.94  0.81  0.72  0.88
    DST1       1.00  0.97  0.84  0.84  0.99
    DST2       1.00  0.98  0.79  0.67  0.99

TABLE II: PPV AT 1.0 METER

                     Position
    Algorithm    1     2     3     4     5
    BAY        0.93  0.91  0.82  0.68  0.91
    C45        0.87  0.91  0.78  0.64  0.95
    ANN        0.92  0.93  0.82  0.67  0.89
    DST1       0.72  0.89  0.82  0.80  0.99
    DST2       0.91  0.90  0.81  0.68  0.97

C. Assigning basic probability numbers

If each localization algorithm is to be represented as a simple support function, the question arises as to how to assign the basic probability numbers with which each simple support function supports the location on which it is focused. One possibility is to compute the basic probability numbers in terms of true and false positives and true and false negatives. A true positive is defined as A(X) = L and T = L, where T is the true location and L is a location output by the algorithm. A true negative is defined as A(X) ≠ L and T ≠ L. A false positive is defined as A(X) = L and T ≠ L. A false negative is defined as A(X) ≠ L and T = L.

Let TP, TN, FP, and FN be the number of true positives, true negatives, false positives, and false negatives, respectively. Using TP, TN, FP, and FN, one can define four evaluation statistics: sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) [11]. Sensitivity, TP/(TP + FN), estimates the probability of A saying that the signal receiver is at location L given that the signal receiver is at location L, i.e., P[A(X) = L | T = L]. Specificity, defined as TN/(TN + FP), estimates the probability of A saying that the signal receiver is not at L given that the signal receiver is not at L, i.e., P[A(X) ≠ L | T ≠ L]. PPV, defined as TP/(TP + FP), estimates the probability that the receiver is at L given that A says that the receiver is at L, i.e., P[T = L | A(X) = L]. Finally, NPV, defined as TN/(TN + FN), estimates the probability that the signal receiver is not at L given that the algorithm says that the receiver is not at L, i.e., P[T ≠ L | A(X) ≠ L].

The PPV was chosen as the metric for computing basic probability numbers, because it simulates the run-time performance of a localization algorithm. In particular, the PPV estimates the likelihood of the signal receiver being at L when the algorithm states that the receiver is at L.

IV. EXPERIMENTS

Tables I and II show the PPV numbers computed from the robot's validation runs. Table I shows the PPV numbers for the 0.5 meter proximity, and Table II shows the PPV numbers for the 1 meter proximity. In both tables, DST1 denotes the homogeneous combination of simple support functions, while DST2 denotes the heterogeneous combination. To analyze the results, the performance R of each algorithm was discretized into three intervals: strong (0.90 ≤ R), average (0.80 ≤ R < 0.90), and weak (R < 0.80).
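The two combination schemes can be sketched as follows, with PPV values serving as the basic probability numbers sij. This is an illustrative reconstruction under stated assumptions: the function names and dictionary shapes are ours, and thresholding is approximated by treating an algorithm's vote for a location as support for it. For DST2, the foci are disjoint singletons, so for the purpose of the argmax the normalization by 1 minus the conflict can be skipped (it rescales all supports by the same factor when si < 1).

```python
def dst1_homogeneous(outputs, ppv):
    """DST1: for each location, orthogonally sum the simple support
    functions of every algorithm that voted for it, which yields the
    combined support 1 - prod_j (1 - s_ij); return the argmax location.
    outputs: {algorithm: predicted_location}; ppv: {(algorithm, loc): s_ij}."""
    locations = {loc for (_alg, loc) in ppv}
    support = {}
    for loc in locations:
        prod = 1.0
        for alg, predicted in outputs.items():
            if predicted == loc:               # S^{Aj}_{Li}({Li}) = s_ij
                prod *= 1.0 - ppv[(alg, loc)]
        support[loc] = 1.0 - prod
    return max(support, key=support.get)

def dst2_heterogeneous(best_alg, outputs, ppv):
    """DST2: consult only each location's best algorithm (chosen in
    preliminary tests); the orthogonal sum is heterogeneous because
    each simple support function has a different focus.
    best_alg: {location: best algorithm for that location}."""
    degrees = {loc: (ppv[(alg, loc)] if outputs[alg] == loc else 0.0)
               for loc, alg in best_alg.items()}
    return max(degrees, key=degrees.get)
```

With the three weak PPVs from Table I, position 4 (0.65, 0.67, 0.72), DST1's combined support rises to 1 − 0.35 · 0.33 · 0.28 ≈ 0.97, which mirrors why the homogeneous combination helps when all fused algorithms are individually weak.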
The following observations were made. First, when the performance of all three algorithms is strong, DST1 and DST2 either maintained the same level of performance or slightly improved it. For example, Table I column 1 shows that, at location 1, the three algorithms, i.e., Bayesian, C4.5, and ANN, performed at 0.98, 0.94, and 0.98, respectively. The performance numbers for DST1 and DST2 at the same location and proximity are both 1.0. The same behavior can be observed within 0.5 meter and 1 meter of location 2. As shown in Table I column 2, within 0.5 meter of location 2, the three algorithms performed at 0.95, 0.95, and 0.94. At the same location and proximity, DST1 performed at 0.97 and DST2 at 0.98. As shown in Table II column 2, within 1 meter of location 2, the three algorithms performed at 0.91, 0.91, and 0.93. At the same location and proximity, DST1 performed at 0.89 and DST2 at 0.90.

Second, when all three algorithms performed weakly, DST1 significantly improved performance, while DST2 remained on the same weak level. For example, Table I column 4 shows that within 0.5 meter of location 4, the three algorithms performed at 0.65, 0.67, and 0.72. At the same location and proximity, DST1 achieved 0.84, a significant improvement, while DST2 remained at 0.67. Similarly, as shown in Table II column 4, within 1 meter of location 4, the performance levels of the three algorithms were 0.68, 0.64, and 0.67. At the same location and proximity, DST1 achieved 0.80, a substantial improvement, while DST2 remained at 0.68.

Third, when two algorithms performed strongly and one at the average level, DST2 improved the overall performance or kept it on the same level, while DST1 behaved inconsistently. For example, as shown in Table II column 1, DST2 remained on the same level, while DST1's performance worsened. However, as shown in Table I column 5 and Table II column 5, both DST1 and DST2 raised the performance level significantly at location 5.

Fourth, localization at the proximity of 0.5 meter was overall better than at the proximity of 1 meter, because the location areas were further apart and the wireless signals were not confounded. Fifth, the localization performance dropped at locations 3 and 4, because the locations were only 3 meters apart from each other and cross misclassification was frequently observed.

V. CONCLUSION

The following tentative conclusions can be made from the above observations. First, when all algorithms whose outputs are fused perform strongly, the addition of sensor fusion is likely to improve the overall performance and move it toward 1.0. Second, when all algorithms perform weakly, homogeneous sensor fusion is likely to improve performance significantly. Third, if possible, locations should be selected further apart so as not to confound wireless signals. It should be noted that these conclusions apply only to wireless localization indoors and are not to be interpreted as general recommendations. The term sensor fusion in the presented conclusions refers only to the sensor fusion methods described in this paper, i.e., when the fused algorithms are represented as DST simple support functions that subsequently are fused homogeneously or heterogeneously.

REFERENCES

[1] M. Addlesee, R. Curwen, S. Hodges, J. Newman, P. Steggles, and A. Ward, "Implementing a Sentient Computing System," IEEE Computer, pp. 2-8, August 2001.
[2] R.G. Golledge, J.R. Marston, and C.M. Costanzo, "Assistive Devices and Services for the Disabled: Auditory Signage and the Accessible City for Blind and Vision Impaired Travelers," Technical Report UCB-ITS-PWP-98-18, Department of Geography, University of California Santa Barbara, 1998.
[3] T. Henderson and E. Shilcrat, "Logical Sensor Systems," Journal of Robotic Systems, 2(1):169-193, 1984.
[4] A. Howard, S. Siddiqi, and G.S. Sukhatme, "An Experimental Study of Localization Using Wireless Ethernet," The 4th International Conference on Field and Service Robotics, July 2003, Lake Yamanaka, Japan.
[5] I. Kramosil, Probabilistic Analysis of Belief Functions, Kluwer Academic Publishers: New York, NY, 2001.
[6] V. Kulyukin, C. Gharpure, J. Nicholson, and S. Pavithran, "RFID in Robot-Assisted Indoor Navigation for the Visually Impaired," IEEE/RSJ Intelligent Robots and Systems (IROS 2004) Conference, September-October 2004, Sendai, Japan: Sendai Kyodo Printing Co.
[7] V. Kulyukin, C. Gharpure, N. De Graw, J. Nicholson, and S. Pavithran, "A Robotic Guide for the Visually Impaired in Indoor Environments," Rehabilitation Engineering and Assistive Technology Society of North America (RESNA 2004) Conference, June 2004, Orlando, FL: Available on CD-ROM.
[8] V. Kulyukin, C. Gharpure, P. Sute, N. De Graw, J. Nicholson, and S. Pavithran, "A Robotic Wayfinding System for the Visually Impaired," Innovative Applications of Artificial Intelligence (IAAI-04) Conference, July 2004, San Jose, CA: AAAI/MIT Press.
[9] B. Kuipers, "The Spatial Semantic Hierarchy," Artificial Intelligence, 119:191-233, 2000.
[10] H.E. Kyburg, "Bayesian and Non-Bayesian Evidential Updating," Artificial Intelligence, 31(3):271-293, 1987.
[11] Laboratory Services Handbook, Department of Pathology, The University of Iowa, http://www.medicine.uiowa.edu/Path_Handbook/.
[12] A.M. Ladd, K. Bekris, A. Rudys, G. Marceau, L. Kavraki, and D. Wallach, "Robotics-Based Location Sensing Using Wireless Ethernet," Eighth Annual International Conference on Mobile Computing and Networking (MobiCom), September 2002, Atlanta, GA: ACM.
[13] J.F. Lemmer, "Confidence Factors, Empiricism, and the Dempster-Shafer Theory of Evidence," in Uncertainty in Artificial Intelligence, L.N. Kanal and J.F. Lemmer, Eds. Elsevier Scientific Publishers: Amsterdam, The Netherlands, 1986.
[14] J.R. Marston and R.G. Golledge, "Towards an Accessible City: Removing Functional Barriers for the Blind and Visually Impaired: A Case for Auditory Signs," Technical Report, Department of Geography, University of California at Santa Barbara, 2000.
[15] T. Mitchell, Machine Learning, McGraw-Hill: New York, NY, 1997.
[16] R.R. Murphy, "Dempster-Shafer Theory for Sensor Fusion in Autonomous Mobile Robots," IEEE Transactions on Robotics and Automation, 14(2), April 1998.
[17] J. Pearl, Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference, Morgan Kaufmann: San Mateo, CA, 1988.
[18] D.A. Ross, "Talking Braille: Making Braille Signage Accessible at a Distance," Rehabilitation Engineering and Assistive Technology Society of North America (RESNA-2004) Conference, Orlando, FL, June 2004.
[19] O. Serrano, "Robot Localization Using Wireless Networks," Technical Report, Departamento de Informatica, Estadistica y Telematica, Universidad Rey Juan Carlos, Mostoles, Spain, 2003.
[20] G. Shafer, A Mathematical Theory of Evidence, Princeton University Press: Princeton, NJ, 1976.
[21] P. Smets, "The Combination of Evidence in the Transferable Belief Model," IEEE Transactions on Pattern Analysis and Machine Intelligence, 12:447-458, 1990.