PySense aims to bring wireless sensor network (and "Internet of Things") macroprogramming to the audience of Python programmers. WSN macroprogramming is an emerging approach in which the network is programmed as a whole and the programmer focuses only on the application logic. The PySense runtime environment partitions the code and transmits code snippets to the right nodes, striking a balance between energy consumption and computing performance.
5. Decentralization
• In today's cloud computing, "computation costs less than storage" → "move the computation to the data"
• In today's wireless networks, "computation costs less than communication" → move the computation to the data (the sensors)
6. Decentralization
• Wireless sensor computing
– not only simple sensors connected to a central computer, but elements capable of computation in a distributed system
• Computation vs. communication
– sending one byte demands roughly 100 times the energy of one integer instruction
7. Programming models
• Node-level programming
– a program for each node type (error-prone, difficult, only for experts)
• Network as a database
– good, but limited to queries (TinyDB)
• Macroprogramming
– program the network as a whole; the middleware partitions the code onto the nodes automatically
8. Use case
• Given an energy consumption model E and an application code C, there exists a partitioning of the code C = {c1, c2, ..., cn} and a set of transmissions Tx = {tx1, tx2, ..., txk} which is optimal for E.
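The optimal-partitioning claim above can be illustrated with a toy energy model. The fragment names, instruction counts, and byte sizes below are all hypothetical; only the 100x transmission-to-instruction cost ratio comes from the slides. For a linear pipeline of fragments, finding the optimum reduces to choosing where to "cut" between mote and base:

```python
INSTR_COST = 1    # energy units per instruction (assumed scale)
TX_COST = 100     # energy units per transmitted byte (the 100x ratio from slide 6)
RAW_BYTES = 100   # size of a raw sensor batch in bytes (hypothetical)

# A linear pipeline of code fragments: (name, instruction count, output bytes).
pipeline = [("sample", 10, 100), ("average", 50, 1), ("format", 5, 1)]

def energy(cut):
    """Energy when the first `cut` fragments run on the mote, the rest on base."""
    instr = sum(n for _, n, _ in pipeline) * INSTR_COST      # same work either way
    sent = pipeline[cut - 1][2] if cut > 0 else RAW_BYTES    # bytes crossing the radio
    return instr + sent * TX_COST

best_cut = min(range(len(pipeline) + 1), key=energy)
print(best_cut, energy(best_cut))  # 2 165
```

Here the optimum moves "average" onto the mote: one byte crosses the radio instead of one hundred, cutting the energy from 10065 to 165 units.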
9. PySense
• What is PySense?
– a programming model for Python
– a middleware based on Python decorators and an API
– it runs on:
• the Base Runtime Environment (based on Python 2.6)
• the Remote Runtime Environment (based on python-on-a-chip)
10. What PySense is not
• not a ready-to-use solution
• not bound to a specific network topology (mesh, clustering, ...) or network protocol stack (ZigBee, TCP, ...)
• not efficient in terms of memory used by the nodes
11. Motivation
• Python:
– easy to use
– easy to learn
– rapid prototyping
• ...and because we like to go beyond the limits
12. What is a Python decorator?
Syntactic sugar:

@foo
def f(x): <implementation>

is equivalent to

f = foo(f)

where foo returns a function.
13. What is a Python decorator?
def foo(func):
    def my_f(*args):
        do_something_pre()
        result = func(*args)
        do_something_post()
        return result
    return my_f

@foo
def f(x):
    return x*x
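A runnable version of the decorator pattern on this slide. The do_something_pre/do_something_post hooks are hypothetical; here they are replaced by appends to a list so the wrapping effect is visible:

```python
calls = []

def foo(func):
    def my_f(*args):
        calls.append("pre")       # stands in for do_something_pre()
        result = func(*args)      # call the wrapped function
        calls.append("post")      # stands in for do_something_post()
        return result             # propagate the wrapped function's result
    return my_f

@foo
def f(x):
    return x * x

print(f(3))   # 9
print(calls)  # ['pre', 'post']
```

Note that `my_f` must return `func`'s result, otherwise every decorated function would silently return None.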
16. @mote
@mote
class M:
    def getX(self): pass
    def setY(self, y): pass

m = M()      # finds a mote m that has X and Y
m.getX()     # reads X from m, translated into a network message from the invoker to m
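This is not the PySense implementation, but a minimal sketch of how a class decorator like @mote could replace stub methods with proxies that build network messages instead of executing locally. The message format and the `addr` attribute are hypothetical:

```python
def mote(cls):
    # Replace every public stub method with a message-building proxy.
    for name, attr in list(vars(cls).items()):
        if callable(attr) and not name.startswith("_"):
            def make_proxy(method_name):
                def proxy(self, *args):
                    msg = f"TO {self.addr} CALL {method_name}"
                    if args:
                        msg += " " + ",".join(map(str, args))
                    return msg  # a real middleware would transmit this to the mote
                return proxy
            setattr(cls, name, make_proxy(name))
    return cls

@mote
class M:
    def getX(self): pass
    def setY(self, y): pass

m = M()
m.addr = "node-42"    # hypothetical mote address assignment
print(m.getX())       # TO node-42 CALL getX
print(m.setY(5))      # TO node-42 CALL setY 5
```

The `make_proxy` helper is needed so each proxy captures its own method name rather than the loop variable.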
17. @mote
A method call is translated into a network message: TO <addr> CALL f x,y,...

class M:
    def getX(self): pass
    @onboard
    def f(self, args): <code to move onto the node>
    @onbase
    def g(self, args): <code to run on the base>
    @auto
    def h(self, args): <some code>
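At minimum, the three placement decorators could simply tag each function with a hint that the partitioner reads later. This is a hypothetical sketch, not the PySense middleware; the `placement` attribute name is invented for illustration:

```python
def onboard(func):
    func.placement = "node"   # move this code onto the mote
    return func

def onbase(func):
    func.placement = "base"   # keep this code on the base station
    return func

def auto(func):
    func.placement = "auto"   # let the middleware decide from the energy model
    return func

class M:
    @onboard
    def f(self, args): ...
    @onbase
    def g(self, args): ...
    @auto
    def h(self, args): ...

print(M.f.placement, M.g.placement, M.h.placement)  # node base auto
```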
18. Simple program
r = Region("/somewhere")

@mote
class CO2Sense:
    def getConc(self):
        pass

values = [c.getConc() for c in r.items(CO2Sense)]
avg = sum(values) / len(values)

Many long-range messages are exchanged between the base and the remotes.
19. A little better
@mote
class CO2Sense:
    def getConc(self): pass

class CO2Group(Group):
    @onboard
    def avg(self):
        return sum([m.getConc() for m in self.motes]) / len(self.motes)

avg = CO2Group(r.items(CO2Sense)).avg()

avg() runs on one remote elected head of the group, so only short-range messages are exchanged.
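A local, network-free sketch of the group aggregation above: the aggregate runs once over all motes, standing in for execution on the elected group head. `Group`, `FakeCO2Sense`, and the readings are hypothetical stand-ins, not PySense API:

```python
class Group:
    def __init__(self, motes):
        self.motes = list(motes)

class FakeCO2Sense:
    def __init__(self, conc):
        self._conc = conc
    def getConc(self):
        return self._conc   # a real mote would sample its CO2 sensor here

class CO2Group(Group):
    def avg(self):          # with @onboard this would run on the group head
        return sum(m.getConc() for m in self.motes) / len(self.motes)

avg = CO2Group([FakeCO2Sense(c) for c in (400, 420, 440)]).avg()
print(avg)  # 420.0
```

The point of the slide is where this code runs: computing the average next to the sensors means one short-range exchange per mote plus a single long-range result, instead of one long-range message per reading.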
20. Where we are
• API and decorators implemented and tested in PC processes running the python-on-a-chip interpreter, with pipes for I/O
• To do:
– @auto
– deployment on real boards ... we want a real testbed, not a simulation
21. A lot to do
(Hey ... this is a position paper)
• Approach #1
– running a VM and interpreter on the mote
– memory-demanding
• Approach #2
– compiling @onboard Python to native code
– needs over-the-air (OTA) reconfiguration
22. VM on board
• mbed board: 64 KB RAM, 100 MHz clock, USB, ETH, ...
• Running Python and HTTP on it is too demanding