This document covers robot swarms and swarm robotics. It introduces the marXbot, a miniature mobile robot with various sensors that can dock with other robots, and discusses challenges of swarm robotics such as noise and uncertainty. It then covers action logics and Markov decision processes as ways to model probabilistic behavior in robot swarms. Finally, it discusses reinforcement learning techniques, such as hierarchical reinforcement learning and decomposition, that help address the challenge of large state spaces.
1. Robot Swarms as Ensembles of Cooperating Components
Matthias Hölzl
With contributions from Martin Wirsing, Annabelle Klarl
AWASS
Lucca, June 24, 2013
www.ascens-ist.eu
3. marXbot
Miniature mobile robot developed by EPFL
Rough-terrain mobility
Robots can dock with other robots
Many sensors
Proximity sensors
Gyroscope
3D accelerometer
RFID reader
Cameras
. . .
ARM-based Linux system
Gripper for picking up items
5. Problems
Noise, sensor resolution
Extracting information from sensor data
Unforeseen situations
Uncertainty about the environment
Performing complex actions when intermediate results are uncertain
. . .
6. Action Logics
Logics that can represent change over time
Probabilistic behavior can be modeled, but doing so is cumbersome
7. Markov Decision Processes
[Diagram: a grid-world MDP over robot positions pos = (x, y), pos = (x+1, y), pos = (x, y+1), pos = (x+1, y+1), with compass actions n, s, e, w. Edges are labeled action / probability / reward: the intended move succeeds with probability 0.9 (e.g. s / 0.9 / -0.1), the robot slips to each perpendicular direction with probability 0.025 (e.g. e, w / 0.025 / -0.1), and it stays in place with probability 0.05 (s, n, w, e / 0.05 / -0.1); every transition has reward -0.1.]
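Read as action / probability / reward, the slip model in the diagram can be written down as a transition function. A minimal Python sketch (the function and names are my own illustration, not from the slides):

```python
# Sketch of the slide's grid-world MDP: the intended move succeeds with
# probability 0.9, slips to each perpendicular direction with
# probability 0.025, and the robot stays put with probability 0.05.
# Every transition has reward -0.1.
MOVES = {"n": (0, 1), "s": (0, -1), "e": (1, 0), "w": (-1, 0)}
PERP = {"n": ("e", "w"), "s": ("e", "w"), "e": ("n", "s"), "w": ("n", "s")}

def transitions(pos, action):
    """Return all (next_pos, probability, reward) triples for one step."""
    x, y = pos
    out = []
    for a, p in [(action, 0.9), (PERP[action][0], 0.025), (PERP[action][1], 0.025)]:
        dx, dy = MOVES[a]
        out.append(((x + dx, y + dy), p, -0.1))
    out.append((pos, 0.05, -0.1))  # self-loop: the move fails entirely
    return out
```

The probabilities of the four outcomes sum to 1, matching the edge labels in the diagram.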
10. Reinforcement Learning
General idea:
Figure out the expected value of each action in each state
Pick the action with the highest expected value (most of the time)
Update the expectations according to the actual rewards
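The three steps above amount to epsilon-greedy action selection plus a temporal-difference update of the value estimates. A minimal tabular Q-learning sketch in Python (all names and parameter values are my own illustration, not from the slides):

```python
import random

def choose(Q, state, actions, epsilon=0.1):
    """Pick the highest-valued action "most of the time"; explore otherwise."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q.get((state, a), 0.0))

def update(Q, state, action, reward, next_state, actions,
           alpha=0.5, gamma=0.9):
    """Move the estimate toward the observed reward plus discounted future value."""
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
```

Here `alpha` is the learning rate and `gamma` the discount factor; both are illustrative choices.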
11. How well does this work?
Rather well for small problems
But: state explosion
16. n-armed Bandits
[Diagram: a single state S with two self-loop actions, labeled action / transition probability / reward distribution: search / 1.0 / N(0.1, 1.0) and coll-known / 1.0 / N(0.3, 3.0).]
Choice between n actions
Reward depends probabilistically on the action choice
No long-term consequences
Simplest form of TD-learning
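The two arms from the diagram can be simulated directly. A minimal Python sketch (not from the slides) assuming N(mean, std-dev) rewards, sample-average value estimates, and epsilon-greedy arm selection:

```python
import random

# Two arms with the Gaussian rewards from the diagram; the second
# parameter is assumed to be the standard deviation.
ARMS = {"search": (0.1, 1.0), "coll-known": (0.3, 3.0)}

def run(steps=20000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    value = {a: 0.0 for a in ARMS}   # sample-average reward estimates
    count = {a: 0 for a in ARMS}
    for _ in range(steps):
        if rng.random() < epsilon:               # explore
            arm = rng.choice(list(ARMS))
        else:                                    # exploit
            arm = max(ARMS, key=value.get)
        mu, sigma = ARMS[arm]
        r = rng.gauss(mu, sigma)
        count[arm] += 1
        value[arm] += (r - value[arm]) / count[arm]  # incremental mean
    return value, count
```

Because there are no long-term consequences, updating the running mean of each arm's reward is already the whole learning problem.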
17. Flat Learning
[Console output: an ASCII maze (XX = wall, TT = target, RR = robot) alongside the learner's state at the robot's position. Target: (0 0); Choices: (N E S W); Q-values: N -1.8, E -1.8, S -2.25, W 2.76; recommended choice is W.]
18. Flat Learning
(defun simple-robot ()
  (call (nav (target-loc (robot-env)))))

(defun nav (loc)
  (until (equal (robot-loc) loc)
    (with-choice navigate-choice (dir '(N E S W))
      (action navigate-move dir))))
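The ALisp program above loops until the robot reaches the target, making one learnable choice of direction (`with-choice`) per step. A rough Python analogue with the learned policy stubbed out as a plain function (all names here are invented for illustration, not part of ALisp):

```python
MOVES = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}

def nav(robot_loc, target, policy):
    """Walk from robot_loc to target, asking `policy` at each choice point."""
    steps = []
    while robot_loc != target:
        direction = policy(robot_loc)      # the learnable choice point
        dx, dy = MOVES[direction]
        robot_loc = (robot_loc[0] + dx, robot_loc[1] + dy)
        steps.append(direction)
    return steps

# A trivial hand-written stand-in for the learned policy: walk straight
# toward the target (no walls considered).
def greedy_policy_for(target):
    def policy(loc):
        if loc[0] != target[0]:
            return "E" if loc[0] < target[0] else "W"
        return "N" if loc[1] < target[1] else "S"
    return policy
```

In flat learning, the Q-learner fills the role of `policy`, choosing among N/E/S/W at every step from the learned Q-values.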