This document provides an overview of sensor node hardware platforms and programming challenges. It discusses the main components of a basic sensor node, including the controller, memory, sensors/actuators, communication, and power supply. It describes three categories of sensor node hardware: augmented general-purpose computers, dedicated embedded sensor nodes like the Berkeley Motes, and system-on-chip nodes. It then focuses on the Berkeley Motes, outlining their architecture including dual CPU design, memory, radio communication, and energy consumption of components. The document emphasizes that programming for resource-constrained sensor node hardware presents challenges in optimizing for small memory footprints.
3. Please read this disclaimer before proceeding:
This document is confidential and intended solely for the educational purpose of
RMK Group of Educational Institutions. If you have received this document
through email in error, please notify the system manager. This document
contains proprietary information and is intended only for the respective group /
learning community. If you are not the addressee, you should not disseminate,
distribute, or copy it through e-mail. Please notify the sender immediately by
e-mail if you have received this document by mistake and delete it from your
system. If you are not the intended recipient, you are notified that disclosing,
copying, distributing, or taking any action in reliance on the contents of this
information is strictly prohibited.
4. EC8702 - AD HOC AND
WIRELESS SENSOR
NETWORKS
Department : ECE
Batch/Year : 2017-2021/IV
Created by : Darwin Nesakumar A
Date : 2.10.2020
R. M. K. ENGINEERING
COLLEGE
5. Table of Contents

S.No Contents Page Number
1 Course Objectives 7
2 Pre Requisites 8
3 Syllabus 9
4 Course Outcomes 11
5 CO-PO/PSO Mapping 12
6 UNIT 5 Sensor Network Platforms and Tools 13
6.1 Lecture Plan 14
6.2 Activity based learning 15
6.3 Lecture Notes 16
    Sensor Node Hardware 19
    Berkeley Motes, Programming Challenges 20
    Node-level software platforms TinyOS 22
    CONTIKIOS, nesC 26
    Node-level Simulators 29
    NS2 and its extension to sensor networks 31
    COOJA, TOSSIM 33
    Programming beyond individual nodes 36
    State centric programming 41
6. Table of Contents

S.No Contents Page Number
6.4 Assignments 43
6.5 Part A Q & A 44
6.6 Part B Q 47
6.7 Supportive online Certification courses 48
6.8 Real time Applications in day to day life and to Industry 49
6.9 Contents beyond the Syllabus 50
7 Assessment Schedule 54
8 Prescribed Text Books & Reference Books 55
9 Mini Project suggestions 56
7. 1. COURSE OBJECTIVES
The student should be made to
Learn Ad hoc network and Sensor Network fundamentals
Understand the different routing protocols
Have an in-depth knowledge on sensor network architecture and design issues
Understand the transport layer and security issues possible in Ad hoc and Sensor
networks
Have an exposure to mote programming platforms and tools
9. 3. SYLLABUS
EC8702 - AD HOC AND WIRELESS SENSOR NETWORKS
L T P C
3 0 0 3
UNIT I AD HOC NETWORKS – INTRODUCTION AND ROUTING
PROTOCOLS 9
Elements of Ad hoc Wireless Networks, Issues in Ad hoc wireless networks, Example
commercial applications of Ad hoc networking, Ad hoc wireless Internet, Issues in
Designing a Routing Protocol for Ad Hoc Wireless Networks, Classifications of Routing
Protocols, Table Driven Routing Protocols – Destination Sequenced Distance Vector
(DSDV), On–Demand Routing protocols –Ad hoc On–Demand Distance Vector Routing
(AODV).
UNIT II SENSOR NETWORKS – INTRODUCTION &
ARCHITECTURES 9
Challenges for Wireless Sensor Networks, Enabling Technologies for Wireless Sensor
Networks, WSN application examples, Single-Node Architecture – Hardware
Components, Energy Consumption of Sensor Nodes, Network Architecture – Sensor
Network Scenarios, Transceiver Design Considerations, Optimization Goals and Figures
of Merit.
UNIT III WSN NETWORKING CONCEPTS AND PROTOCOLS 9
MAC Protocols for Wireless Sensor Networks, Low Duty Cycle Protocols And Wakeup
Concepts – S-MAC, The Mediation Device Protocol, Contention based protocols –
PAMAS, Schedule based protocols – LEACH, IEEE 802.15.4 MAC protocol, Routing
Protocols – Energy Efficient Routing, Challenges and Issues in Transport layer protocol.
UNIT IV SENSOR NETWORK SECURITY 9
Network Security Requirements, Issues and Challenges in Security Provisioning,
Network Security Attacks, Layer wise attacks in wireless sensor networks, possible
solutions for jamming, tampering, black hole attack, flooding attack. Key Distribution
and Management, Secure Routing – SPINS, reliability requirements in sensor
networks.
UNIT V SENSOR NETWORK PLATFORMS AND TOOLS 9
Sensor Node Hardware – Berkeley Motes, Programming Challenges, Node-level
software platforms – TinyOS, nesC, CONTIKIOS, Node-level Simulators – NS2 and its
extension to sensor networks, COOJA, TOSSIM, Programming beyond individual
nodes – State centric programming.
TOTAL HOURS : 45
11. 4. COURSE OUTCOMES

CO No Course Outcomes Highest Cognitive Level
CO1 Know the basics of Ad hoc networks and Wireless Sensor Networks K2
CO2 Apply this knowledge to identify the suitable routing algorithm based on the network and user requirement K3
CO3 Apply the knowledge to identify appropriate physical and MAC layer protocols K3
CO4 Understand the transport layer and security issues possible in Ad hoc and sensor networks K2
CO5 Be familiar with the OS used in Wireless Sensor Networks and build basic modules K2
CO6 Understand the sensor network simulation platforms and tools K2
14. 6.1 LECTURE PLAN
Total No. of Periods : 9

S.No Topic No. of Periods Pertaining CO Taxonomy Level Mode of Delivery
1 Sensor Node Hardware 1 CO6 K1 (Remember) PowerPoint through online
2 Berkeley Motes, Programming Challenges 1 CO6 K1 (Remember) PowerPoint through online
3 Node-level software platforms TinyOS 1 CO6 K1 (Remember) PowerPoint through online
4 CONTIKIOS, nesC 1 CO6 K1 (Remember) PowerPoint through online
5 Node-level Simulators 1 CO6 K1 (Remember) PowerPoint through online
6 NS2 and its extension to sensor networks 1 CO6 K1 (Remember) PowerPoint through online
7 COOJA, TOSSIM 1 CO6 K1 (Remember) PowerPoint through online
16. 6.2 ACTIVITY BASED LEARNING

Activity 1:
Topic Name : Programming challenges
Name of the Activity : Fishbowl debate
Description : For example, the person on the left takes one position on a topic for debate, the person on the right takes the opposite position, and the person in the middle takes notes and decides which side is the most convincing and provides an argument for his or her choice.
Students can get an idea about the Programming challenges.
Activity 2:
Topic Name : Network Simulator 2 – Operating Steps
Name of the Activity : Think-Pair-Share
Description : Students write a response and then share it with a student nearby. Students clarify their positions and discuss points of agreement and disagreement. The instructor can use several answers to illustrate important points or facilitate a whole-class discussion.
Activity 3:
Topic Name : Node Level Simulators
Name of the Activity : Seminar
Description : One of the students is given the seminar topic to explain Node Level Simulators.
17. UNIT V SENSOR NETWORK PLATFORMS AND TOOLS
Sensor Node Hardware – Berkeley Motes, Programming Challenges, Node-level
software platforms – TinyOS, nesC, CONTIKIOS, Node-level Simulators – NS2 and its
extension to sensor networks, COOJA, TOSSIM, Programming beyond individual
nodes – State centric programming.
Introduction
When choosing the hardware components for a wireless sensor node, the
application's requirements are clearly the decisive factor, chiefly with regard to
size, cost, and energy consumption of the nodes. In most realistic applications,
the sheer size of a node is not the main concern; rather, convenience, a simple
power supply, and cost matter more.
A basic sensor node comprises five main components:
Controller: A controller to process all the relevant data, capable of executing arbitrary code.
Memory: Some memory to store programs and intermediate data; usually, different types of memory are used for programs and data.
Sensors and actuators: The actual interface to the physical world: devices that can observe or control physical parameters of the environment.
Communication: Turning nodes into a network requires a device for sending and receiving information over a wireless channel.
Power supply: As usually no tethered power supply is available, some form of battery is necessary to provide energy. Sometimes, some form of recharging by obtaining energy from the environment is available as well.
The block diagram of a basic sensor node is given in Fig. 1.
Fig. 1 Block diagram of a sensor node
18. Each of these components has to balance the trade-off between consuming
as little energy as possible on the one hand and fulfilling its task on the other.
A real-world sensor network application most likely has to incorporate all these
elements, subject to energy, bandwidth, computation, storage, and real-time
constraints. With ad hoc deployment and frequently changing network topology, a
sensor network application can hardly assume an always-on infrastructure that
provides reliable services such as optimal routing, global directories, or service
discovery. There are two types of programming for sensor networks, those carried
out by end users and those performed by application developers.
An end user may view a sensor network as a pool of data and interact with the
network via queries. Just as with query languages for database systems like SQL,
a good sensor network programming language should be expressive enough to
encode application logic at a high level of abstraction, and at the same time be
structured enough to allow efficient execution on the distributed platform. Ideally,
the end users should be shielded away from details of how sensors are organized
and how nodes communicate.
An application developer must provide end users of a sensor network with the
capabilities of data acquisition, processing, and storage. Unlike general distributed
or database systems, collaborative signal and information processing (CSIP)
software comprises reactive, concurrent, distributed programs running on ad hoc,
resource-constrained, unreliable computation and communication platforms.
Developers at this level have to deal with all kinds of uncertainty in the real world.
For example, signals are noisy, events can happen at the same time,
communication and computation take time, communications may be unreliable,
battery life is limited, and so on.
19. I SENSOR NODE ARCHITECTURE
SENSOR NODE HARDWARE
Sensor node hardware can be grouped into three categories, each of which
entails a different set of trade-offs in the design choices.
Augmented general-purpose computers: Examples include low power PCs,
embedded PCs (e.g., PC104), custom-designed PCs and various personal digital
assistants (PDA). These nodes typically run off-the-shelf operating systems such
as Win CE, Linux, or real-time operating systems and use standard wireless
communication protocols such as Bluetooth or IEEE 802.11. Because of their
relatively higher processing capability, they can accommodate a wide variety of
sensors, ranging from simple microphones to more sophisticated video cameras.
Compared with dedicated sensor nodes, PC-like platforms are more power hungry.
However, when power is not an issue, these platforms have the advantage that
they can leverage the availability of fully supported networking protocols, popular
programming languages, middleware, and other off-the-shelf software.
Dedicated embedded sensor nodes: Examples include the Berkeley mote family,
the UCLA Medusa family [202], Ember nodes, and MIT µAMP [32]. These
platforms typically use commercial off-the-shelf (COTS) chip sets with emphasis on
small form factor, low power processing and communication, and simple sensor
interfaces. Because of their COTS CPU, these platforms typically support at least
one programming language, such as C. However, in order to keep the program
footprint small to accommodate their small memory size, programmers of these
platforms are given full access to hardware but barely any operating system
support. A classical example is the TinyOS platform and its companion
programming language, nesC.
System-on-chip (SoC) nodes: Examples of SoC hardware include smart dust, the
BWRC picoradio node [187], and the PASTA node. Designers of these platforms try
to push the hardware limits by fundamentally rethinking the hardware architecture
trade-offs for a sensor node at the chip design level. The goal is to find new ways
of integrating CMOS, MEMS, and RF technologies to build extremely low power
and small footprint sensor nodes that still provide certain sensing, computation,
and communication capabilities. Most of these platforms are currently in the
research pipeline; since there is no predefined instruction set, no software
platform support is available.
20. II BERKELEY MOTES
Among these hardware platforms, the Berkeley motes, due to their small form
factor, open source software development, and commercial availability, have gained
wide popularity in the sensor network research community.
Berkeley Motes
The Berkeley motes are a family of embedded sensor nodes sharing roughly the
same architecture; a subset of mote types is compared in Table 1.
Example
The MICA motes have a two-CPU design. The main microcontroller (MCU), an
Atmel ATmega103L, takes care of regular processing. A separate and much less
capable coprocessor is only active when the MCU is being reprogrammed. The
ATmega103L MCU has 128 KB of integrated flash memory and 4 KB of data memory.
Given these small memory sizes, writing software for motes is challenging. Ideally,
programmers should be relieved from optimizing code at assembly level to keep
code footprint small. However, high-level support and software services are not
free. Being able to mix and match only necessary software components to support
a particular application is essential to achieving a small footprint. The node
architecture is given in the fig.2
Fig.2 Node Architecture
21. In addition to the memory inside the MCU, a MICA mote also has a separate 512
KB flash memory unit that can hold data. Since the connection between the MCU
and this external memory is a low-speed serial peripheral interface (SPI) protocol,
the external memory is more suited for storing data for later batch processing
than for storing programs. The RF communication on MICA motes uses the
TR1000 chip set (from RF Monolithics, Inc.) operating in the 916 MHz band. With
hardware accelerators, it can achieve a maximum raw data rate of 50 kbps; MICA
motes implement a 40 kbps transmission rate.
The transmission power can be digitally adjusted by software through a
potentiometer. The maximum transmission range is about 300 feet in open space.
Like other types of motes in the family, MICA motes support a 51 pin I/O
extension connector. Sensors, actuators, serial I/O boards, or parallel I/O boards
can be connected via the connector. A sensor/ actuator board can host a
temperature sensor, a light sensor, an accelerometer, a magnetometer, a
microphone, and a beeper. The serial I/O(UART) connection allows the mote to
communicate with a PC in real time. The parallel connection is primarily for
downloading programs to the mote. Among the components on a MICA mote,
radio transmission bears the maximum power consumption; the energy needed to
send one packet supports the radio receiver for only about 27 ms.
Another observation is that there are huge differences among the power
consumption levels in the active mode, the idle mode, and the suspend mode of
the MCU. It is thus worthwhile, from an energy-saving point of view, to suspend
the MCU and the RF receiver as long as possible. Table 1 summarizes the
characteristics of the MicaZ, Mica2 and Mica2dot processor boards.
22. III NETWORK PROGRAMMING
CHALLENGES
Network Programming Challenges
Traditional programming technologies rely on operating systems to provide
abstraction for processing, I/O, networking, and user interaction hardware. When
applying such a model to programming networked embedded systems, such as
sensor networks, the application programmers need to explicitly deal with
message passing, event synchronization, interrupt handling, and sensor reading.
As a result, an application is typically implemented as a finite state machine (FSM)
that covers all extreme cases: unreliable communication channels, long delays,
irregular arrival of messages, simultaneous events, etc.
For resource-constrained embedded systems with real-time requirements, several
mechanisms are used in embedded operating systems to reduce code size,
improve response time, and reduce energy consumption. Microkernel
technologies modularize the operating system so that only the necessary parts
are deployed with the application. Real-time scheduling allocates resources to
more urgent tasks so that they can be finished early. Event-driven execution
allows the system to fall into low-power sleep mode when no interesting events
need to be processed. At the extreme, embedded operating systems tend to
expose more hardware controls to the programmers, who now have to directly
face device drivers and scheduling algorithms, and optimize code at the assembly
level. Although these techniques may work well for small, stand-alone embedded
systems, they do not scale up for the programming of sensor networks for two
reasons:
Sensor networks are large-scale distributed systems, where global
properties are derivable from program execution in a massive number
of distributed nodes. Distributed algorithms themselves are hard to
implement, especially when infrastructure support is limited due to the ad hoc
formation of the system and constrained power, memory, and bandwidth
resources.
As sensor nodes deeply embed into the physical world, a sensor network should
be able to respond to multiple concurrent stimuli at the speed of changes of the
physical phenomena of interest.
23. There is no single universal design methodology for all applications. Depending on
the specific tasks of a sensor network and the way the sensor nodes are
organized, certain methodologies and platforms may be better choices than
others. For example, if the network is used for monitoring a small set of
phenomena and the sensor nodes are organized in a simple star topology, then a
client-server software model would be sufficient. If the network is used for
monitoring a large area from a single access point (i.e., the base station), and if
user queries can be decoupled into aggregations of sensor readings from a subset
of nodes, then a tree structure that is rooted at the base station is a better choice.
However, if the phenomena to be monitored are moving targets, as in target
tracking, then neither the simple client-server model nor the tree organization is
optimal. More sophisticated design methodologies and platforms are required.
The fig.3 shows the typical functions at the different layers.
Fig.3 Functions at different layers
24. IV NODE LEVEL SOFTWARE
PLATFORMS
Node level Software Platforms
A node-level platform can be a node centric operating system, which provides
hardware and networking abstractions of a sensor node to programmers, or it can
be a language platform, which provides a library of components to programmers.
A typical operating system abstracts the hardware platform by providing a set of
services for applications, including file management, memory allocation, task
scheduling, peripheral device drivers, and networking. For embedded systems,
due to their highly specialized applications and limited resources, their operating
systems make different trade-offs when providing these services.
TinyOS and TinyGALS are two representative examples of node-level
programming tools.
Operating System: TinyOS
TinyOS aims at supporting sensor network applications on resource constrained
hardware platforms, such as the Berkeley motes. Like many operating systems,
TinyOS organizes components into layers: the lower a layer is, the “closer” it is to
the hardware; the higher a layer is, the “closer” it is to the application. In addition
to the layers, TinyOS has a unique component architecture and provides as a
library a set of system software components. A component specification is
independent of the component implementation.
Let us consider a TinyOS application example—FieldMonitor, where all nodes in a
sensor field periodically send their temperature and photo sensor readings to a
base station via an ad hoc routing mechanism. In a diagram of the FieldMonitor
application, blocks represent TinyOS components and arrows represent function
calls among them; the directions of the arrows are from callers to callees.
To explain in detail the semantics of TinyOS components, let us first look at the
Timer component of the FieldMonitor application. This component is designed to
work with a clock, which is a software wrapper around a hardware clock that
generates periodic interrupts. The method calls of the Timer component are
shown in the figure as
the arrowheads. An arrowhead pointing into the component is a method of the
component that other components can call. An arrowhead pointing outward is a
method that this component requires another layer component to provide. The
absolute directions of the arrows, up or down, illustrate this component’s
relationship with other layers.
A program executed in TinyOS has two contexts, tasks and events, which provide
two sources of concurrency. Tasks are created (also called posted) by components
to a task scheduler. The default implementation of the TinyOS scheduler maintains
a task queue and invokes tasks according to the order in which they were posted.
Thus tasks are deferred computation mechanisms. Tasks always run to completion
without preempting or being preempted by other tasks. Thus tasks are non
preemptive. The scheduler invokes a new task from the task queue only when the
current task has completed. When no tasks are available in the task queue, the
scheduler puts the CPU into the sleep mode to save energy. The ultimate sources
of triggered execution are events from hardware: clock, digital inputs, or other
kinds of interrupts. The execution of an interrupt handler is called an event
context. The processing of events also runs to completion, but it preempts tasks
and can be preempted by other events.
A trade-off between non-preemptive task execution and program reactiveness
motivates the design of split-phase operations in TinyOS. A call to a split-phase
operation returns immediately, without actually performing the body of the
operation. The true execution of the operation is scheduled later; when the
execution of the body finishes, the operation notifies the original caller through a
separate method call. An example of a split-phase operation is the packet send
method in the Active Messages (AM) component. Sending a packet is a long
operation, involving converting the packets to bytes, then to bits, and ultimately
driving the RF circuits to send the bits one by one. Without a split-phase
execution, sending a packet will block the entire system from reacting to new
events for a significant period of time.
In TinyOS, resource contention is typically handled through explicit rejection of
concurrent requests. All split-phase operations return Boolean values indicating
whether a request to perform the operation is accepted. To avoid loss of packets, a
queue should be incorporated by the caller if necessary. Using a component
architecture that contains all variables inside the components and disallowing
dynamic memory allocation reduces the memory management overhead and
makes the data memory usage statically analyzable. The simple concurrency
model allows high concurrency with low thread maintenance overhead.
26. Imperative Language: nesC
nesC is an extension of C to support and reflect the design of TinyOS. It provides
a set of language constructs and restrictions to implement TinyOS components
and applications. A component in nesC has an interface specification and an
implementation. To reflect the layered structure of TinyOS, interfaces of a nesC
component are classified as provides or uses interfaces. A provides interface is a
set of method calls exposed to the upper layers, while a uses interface is a set of
method calls hiding the lower layer components. Methods in the interfaces can be
grouped and named. Although they have the same method call semantics, nesC
distinguishes the directions of the interface calls between layers as event calls and
command calls. An event call is a method call from a lower layer component to a
higher layer component, while a command is the opposite.
The separation of interface type definitions from how they are used in components
promotes the reusability of standard interfaces. A component can provide and use the same interface type, so that it
can act as a filter interposed between a client and a service. A component may
even use or provide the same interface multiple times.
There are two types of components in nesC, depending on how they are
implemented: modules and configurations. Modules are implemented by
application code (written in a C-like syntax). Configurations are implemented by
connecting interfaces of existing components. The implementation part of a
module is written in C-like code. A keyword call indicates the invocation of a
command. A keyword signal indicates the triggering by an event. Configuration is
another kind of implementation of components, obtained by connecting existing
components. nesC also supports the creation of several instances of a component
by declaring abstract components with optional parameters. Abstract components
are created at compile time in configurations.
In nesC, code can be classified into two types:
Asynchronous code (AC): Code that is reachable from at least one interrupt
handler.
Synchronous code (SC): Code that is only reachable from tasks.
Thus, to correctly handle concurrency, nesC programmers need to have a clear
idea of what is synchronous code and what is asynchronous code. However, since
the semantics is hidden away in the layered structure of TinyOS, it is sometimes
not obvious to the programmers where to add atomic blocks.
27. Contiki OS
Contiki OS is an open-source operating system for resource-constrained hardware
devices with low power and little memory. Contiki OS supports resource-constrained
hardware with the following characteristics:
Low power
Limited memory
Slow CPU
Small size
Limited hardware parallelism
Communication using radio
Low bandwidth
Short range
The motes supported by Contiki OS include the MicaZ, Wismote, Z1, Sky, and ESB motes.
At the kernel level Contiki follows the event-driven model, but it provides optional
threading facilities to individual processes. The kernel comprises a lightweight
event scheduler that dispatches events to running processes. Process execution is
triggered by events dispatched by the kernel to the processes or by a polling
mechanism; polling is used to avoid race conditions. Any scheduled event will run
to completion; however, event handlers can use internal mechanisms for
preemption. Contiki OS supports both asynchronous and synchronous events.
Synchronous events are dispatched immediately to the target process, which
causes it to be scheduled. Asynchronous events, on the other hand, are more like
deferred procedure calls that are enqueued and dispatched later to the target
process. All OS facilities (sensor data handling, communication, device drivers,
etc.) are provided in the form of services. Each service has its interface and
implementation. Applications using a particular service need to know only the
service interface; an application is not concerned with the implementation of a
service.
28. Contiki does not employ any sophisticated scheduling algorithm because it is an
event-driven OS. Events are fired to the target application as they arrive. In the
case of interrupts, the interrupt handlers of an application run according to their priority.
The architecture of Contiki is shown in the figure 4.
Fig. 4 Architecture of Contiki
Contiki provides serialized access to all resources because events run to
completion and Contiki does not allow interrupt handlers to post new events.
Contiki provides an implementation of the TCP/IP protocol stack for small 8-bit
microcontrollers (uIP). uIP does not require its peers to have a complete protocol
stack, but it can communicate with peers running a similar lightweight stack. The
uIP implementation is written in C and has the minimum set of features needed
for a full TCP/IP stack. uIP can support only one network interface, and it
supports the TCP, UDP, ICMP, and IP protocols.
Support for real-time applications is not provided: there is no implementation of
any real-time process scheduling algorithm in Contiki. Contiki also does not
provide any protocol that considers the QoS requirements of multimedia
applications on the network protocol stack side. In addition, since Contiki
provides an implementation of the micro IP (uIP) stack, interactions between
different layers of the protocol stack are not possible.
29. V NODE-LEVEL SIMULATORS
Node-level Simulators
Node-level design methodologies are usually associated with simulators that
simulate the behavior of a sensor network on a per-node basis. Using
simulation, designers can quickly study the performance (in terms of timing,
power, bandwidth, and scalability) of potential algorithms without implementing
them on actual hardware and dealing with the vagaries of actual physical
phenomena. A node-level simulator typically has the following components:
Sensor node model: A node in a simulator acts as a software execution
platform, a sensor host, as well as a communication terminal. In order for
designers to focus on the application-level code, a node model typically
provides or simulates a communication protocol stack, sensor behaviors (e.g.,
sensing noise), and operating system services. If the nodes are mobile, then
the positions and motion properties of the nodes need to be modeled. If
energy characteristics are part of the design considerations, then the power
consumption of the nodes needs to be modeled.
Communication model: Depending on the details of modeling, communication
may be captured at different layers. The most elaborate simulators model the
communication media at the physical layer, simulating the RF propagation
delay and collision of simultaneous transmissions. Alternately, the
communication may be simulated at the MAC layer or network layer, using, for
example, stochastic processes to represent low-level behaviors.
Physical environment model: A key element of the environment within which a
sensor network operates is the physical phenomenon of interest. The
environment can also be simulated at various levels of detail. For example, a
moving object in the physical world may be abstracted into a point signal
source. The motion of the point signal source may be modeled by differential
equations or interpolated from a trajectory profile. If the sensor network is
passive- that is, it does not impact the behavior of the environment-then the
environment can be simulated separately or can even be stored in data files
for sensor nodes to read in. If, in addition to sensing, the network also
performs actions that influence the behavior of the environment, then a more
tightly integrated simulation mechanism is required.
Statistics and visualization: The simulation results need to be collected for
analysis.
30. Since the goal of a simulation is typically to derive global properties from the
execution of individual nodes, visualizing global behaviors is extremely important.
An ideal visualization tool should allow users to easily observe on demand the
spatial distribution and mobility of the nodes, the connectivity among nodes, link
qualities, end-to-end communication routes and delays, phenomena and their
spatio-temporal dynamics, sensor readings on each node, sensor node states,
and node lifetime parameters (e.g., battery power).
A sensor network simulator simulates the behavior of a subset of the sensor
nodes with respect to time. Depending on how the time is advanced in the
simulation, there are two types of execution models: cycle-driven simulation and
discrete-event simulation. A cycle-driven (CD) simulation discretizes the
continuous notion of real time into (typically regularly spaced) ticks and simulates
the system behavior at these ticks. At each tick, the physical phenomena are first
simulated, and then all nodes are checked to see if they have anything to sense,
process, or communicate. Sensing and computation are assumed to be finished
before the next tick. Sending a packet is also assumed to be completed by then.
However, the packet will not be available for the destination node until the next tick.
This split-phase communication is a key mechanism to reduce cyclic
dependencies that may occur in cycle-driven simulations. Most CD simulators do
not allow interdependencies within a single tick.
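The tick loop described above can be sketched as follows. This is an illustrative toy, not any particular simulator's API: the node behavior, the forwarding rule, and the one-tick delivery buffer are all assumptions made for the example.

```python
# Hypothetical sketch of a cycle-driven (CD) simulation loop.
# Packets sent during tick k are buffered and only delivered at tick k+1,
# illustrating the split-phase communication that avoids cyclic
# dependencies within a single tick.

class Node:
    def __init__(self, node_id):
        self.node_id = node_id
        self.inbox = []      # packets delivered at the start of this tick
        self.outbox = []     # packets produced during this tick

    def step(self, tick, phenomena):
        # Sense, process, and (possibly) communicate within this tick.
        reading = phenomena.get(self.node_id)
        if reading is not None:
            self.outbox.append((self.node_id + 1, reading))  # forward to neighbor

def run_cd_simulation(nodes, phenomena_per_tick, ticks):
    pending = []  # packets in flight: (dest_id, payload)
    for tick in range(ticks):
        # 1. Deliver last tick's packets (split-phase communication).
        for dest, payload in pending:
            if dest in nodes:
                nodes[dest].inbox.append(payload)
        pending = []
        # 2. Simulate the physical phenomena first, then step every node.
        phenomena = phenomena_per_tick(tick)
        for node in nodes.values():
            node.step(tick, phenomena)
            pending.extend(node.outbox)
            node.outbox = []
    return nodes

nodes = {i: Node(i) for i in range(3)}
run_cd_simulation(nodes, lambda t: {0: t * 1.0}, ticks=3)
print(nodes[1].inbox)  # node 1 sees node 0's readings one tick late: [0.0, 1.0]
```

Note how the packet produced at tick 2 is still in `pending` when the simulation ends; it would only reach node 1 at tick 3.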
Unlike cycle-driven simulators, a discrete-event (DE) simulator assumes that
time is continuous and an event may occur at any time. An event is a 2-tuple of a
value and a time stamp indicating when the event is supposed to be handled.
Components in a DE simulation react to input events and produce output events.
In node-level simulators, a component can be a sensor node, and the events can
be communication packets; or a component can be a software module within a
node, and the events can be messages passed among these modules. Typically, components
are causal, in the sense that if an output event is computed from an input event,
then the time stamp of the output should not be earlier than that of the input
event. Non-causal components require the simulators to be able to roll back in
time, and worse, they may not define a deterministic behavior of a system. A DE
simulator typically requires a global event queue. All events passing between
nodes or modules are put in the event queue and sorted according to their
chronological order. At each iteration of the simulation, the simulator removes the
first event (the one with earliest time stamp) from the queue and triggers the
component that reacts to that event.
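The global event queue can be sketched with a small DE kernel. The component model and handler names below are hypothetical, chosen only to illustrate the pop-earliest-event loop and the causality constraint:

```python
# Hypothetical sketch of a discrete-event (DE) simulator kernel.
# Events are (timestamp, value) pairs kept in a global priority queue;
# each iteration pops the earliest event and triggers the component
# (here, a handler callback per destination) that reacts to it.
import heapq
import itertools

class DESimulator:
    def __init__(self):
        self.queue = []
        self.counter = itertools.count()  # tie-breaker for equal timestamps
        self.now = 0.0

    def schedule(self, timestamp, dest, value):
        # Causality check: never schedule an event into the past.
        assert timestamp >= self.now, "non-causal event"
        heapq.heappush(self.queue, (timestamp, next(self.counter), dest, value))

    def run(self, handlers):
        log = []
        while self.queue:
            self.now, _, dest, value = heapq.heappop(self.queue)
            log.append((self.now, dest, value))
            handlers[dest](self, self.now, value)  # may schedule new events
        return log

def node_a(sim, now, value):
    if value < 3:
        # React to an input event by emitting an output event later in time.
        sim.schedule(now + 1.5, "A", value + 1)

sim = DESimulator()
sim.schedule(0.0, "A", 0)
log = sim.run({"A": node_a})
print(log)  # events processed in chronological order
```

The tie-breaking counter keeps simultaneous events in insertion order, a common way to make a DE run deterministic.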
31. In terms of timing behavior, a DE simulator is more accurate than a CD simulator,
and as a consequence, DE simulators run slower. The overhead of ordering all
events and computation, in addition to the values and time stamps of events,
usually dominates the computation time. At an early stage of a design when only
the asymptotic behaviors rather than timing properties are of concern, CD
simulations usually require less complex components and give faster simulations.
Partly because of the approximate timing behaviors, which make simulation
results less comparable from application to application, there is no general CD
simulator that fits all sensor network simulation tasks. Many of the simulators are
developed for particular applications and exploit application- specific assumptions
to gain efficiency.
DE simulations are sometimes considered as good as actual implementations,
because of their continuous notion of time and discrete notion of events. There
are several open- source or commercial simulators available. One class of these
simulators comprises extensions of classical network simulators, such as ns-2, J-
Sim (previously known as JavaSim), and GloMoSim/ Qualnet. The focus of these
simulators is on network modeling, protocol stacks, and simulation performance.
Another class of simulators, sometimes called software-in-the-loop simulators,
incorporate the actual node software into the simulation. For this reason, they are
typically attached to particular hardware platforms and are less portable. Examples
include TOSSIM for Berkeley motes and Em* for Linux-based nodes such as
Sensoria WINS NG platforms.
2. NS2 and its extension to sensor networks
The simulator ns-2 is an open-source network simulator that was originally
designed for wired, IP networks. Extensions have been made to simulate
wireless/mobile networks (e.g. 802.11 MAC and TDMA MAC) and more recently
sensor networks. While the original ns-2 only supports logical addresses for each
node, the wireless/mobile extension of it introduces the notion of node locations
and a simple wireless channel model. This is not a trivial extension, since once the
nodes move, the simulator needs to check for each physical layer event whether
the destination node is within the communication range. For a large network, this
significantly slows down the simulation speed.
32. There are two widely known efforts to extend ns-2 for simulating sensor
networks: SensorSim from UCLA and the NRL sensor network extension from the
Navy Research Laboratory. SensorSim also supports hybrid simulation, where
some real sensor nodes, running real applications, can be executed together with
a simulation. The NRL sensor network extension provides a flexible way of
modeling physical phenomena in a discrete event simulator. Physical phenomena
are modeled as network nodes which communicate with real nodes through
physical layers.
The main functionality of ns-2 is implemented in C++, while the dynamics of the
simulation (e.g., time-dependent application characteristics) is controlled by Tcl
scripts. Basic components in ns-2 are the layers in the protocol stack. They
implement the handlers interface, indicating that they handle events. Events are
communication packets that are passed between consecutive layers within one
node, or between the same layers across nodes.
The key advantage of ns-2 is its rich libraries of protocols for nearly all network
layers and for many routing mechanisms. These protocols are modeled in fair
detail, so that they closely resemble the actual protocol implementations.
Examples include the following:
TCP: Reno, Tahoe, Vegas, and SACK implementations.
MAC: 802.3, 802.11, and TDMA.
Ad hoc routing: destination-sequenced distance vector (DSDV) routing,
dynamic source routing (DSR), ad hoc on-demand distance vector (AODV)
routing, and temporally ordered routing algorithm (TORA).
Sensor network routing: directed diffusion, geographical routing (GEAR),
and geographical adaptive fidelity (GAF) routing.
33. TOSSIM
TOSSIM is a dedicated simulator for TinyOS applications running on one or more
Berkeley motes. The key design decisions on building TOSSIM were to make it
scalable to a network of potentially thousands of nodes, and to be able to use the
actual software code in the simulation. To achieve these goals, TOSSIM takes a
cross-compilation approach that compiles the nesC source code into components
in the simulation. The event-driven execution model of TinyOS greatly simplifies
the design of TOSSIM. By replacing a few low-level components such as the A/D
conversion (ADC), the system clock, and the radio front end, TOSSIM translates
hardware interrupts into discrete-event simulator events. The simulator event
queue delivers the interrupts that drive the execution of a node. The upper-layer
TinyOS code runs unchanged.
TOSSIM uses a simple but powerful abstraction to model a wireless network. A
network is a directed graph, where each vertex is a sensor node and each
directed edge has a bit-error rate. Each node has a private piece of state
representing what it hears on the radio channel. By setting connections among
the vertices in the graph and a bit-error rate on each connection, wireless
channel characteristics, such as imperfect channels, hidden terminal problems,
and asymmetric links can be easily modeled. Wireless transmissions are
simulated at the bit level. If a bit error occurs, the simulator flips the bit.
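The directed-graph abstraction with per-link bit-error rates can be illustrated with a short sketch. The link table and packet below are made up for illustration; this is not TOSSIM's actual code:

```python
# Hypothetical sketch of TOSSIM's wireless abstraction: a directed graph
# whose edges carry a bit-error rate; each transmitted bit is flipped
# independently with that probability.
import random

def transmit(bits, bit_error_rate, rng):
    # Flip each bit with probability bit_error_rate.
    return [b ^ 1 if rng.random() < bit_error_rate else b for b in bits]

# Directed graph: links[(src, dst)] = bit-error rate. An asymmetric link is
# modeled simply by giving (a, b) and (b, a) different rates.
links = {(0, 1): 0.0, (1, 0): 0.5}

rng = random.Random(42)
packet = [1, 0, 1, 1, 0, 0, 1, 0]
print(transmit(packet, links[(0, 1)], rng))  # perfect link: delivered intact
noisy = transmit(packet, links[(1, 0)], rng)
print(sum(a != b for a, b in zip(packet, noisy)), "bit errors on the reverse link")
```

Setting a rate of 0 models a perfect channel and a rate near 0.5 models a nearly useless one, which is how hidden-terminal and asymmetric-link scenarios can be composed from simple per-edge numbers.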
TOSSIM has a visualization package called TinyViz, which is a Java application that
can connect to TOSSIM simulations. TinyViz also provides mechanisms to control a
running simulation by, for example, modifying ADC readings, changing channel
properties, and injecting packets. TinyViz is designed as a communication service
that interacts with the TOSSIM event queue. The exact visual interface takes the
form of plug-ins that can interpret TOSSIM events. Besides the default visual
interfaces, users can add application-specific ones easily.
COOJA
The Cooja simulator is a cross-layer, Java-based wireless sensor network simulator
distributed with Contiki. It allows simulation at different levels, from the physical
to the application layer, and also allows emulation of the hardware of a set of
sensor nodes. Cooja is a network simulator specifically designed for
wireless sensor networks.
COOJA permits the emulation of real hardware platforms. It is distributed with
Contiki OS and concentrates on network behavior. COOJA is capable of simulating a
wireless sensor network without any particular mote. Cooja supports the following
set of standards: TR 1100, TI CC2420, Contiki-RPL, IEEE 802.15.4, the uIPv6 stack,
and the uIPv4 stack.
34. There are four propagation models in the COOJA simulator, one of which must be
selected before starting a new simulation. The first model is constant-loss Unit Disk
Graph Medium (UDGM): it takes an ideal transmission-range disk in which motes
inside the transmission disk receive data packets and motes outside the disk do not
receive any packets. The second model, distance-loss UDGM, is an extension of
constant-loss UDGM that also considers radio interference. Packets are transmitted
with probability “success ratio TX” and received with probability “success ratio RX”.
The third model is Directed Graph Radio Medium (DGRM), which states the
propagation delays for the radio links. The last path-loss model is Multipath
Ray-tracer Medium (MRM), which uses ray-tracing methods such as the Friis formula
to calculate the received power. MRM is also capable of computing the diffractions,
reflections, and refractions along the radio links.
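As an illustration of the UDGM reception decisions described above, here is a hedged sketch; the geometry helper and the way the TX/RX probabilities are combined are assumptions for the example, and COOJA's actual implementation differs in detail:

```python
# Hypothetical sketch of COOJA's UDGM reception decision. In constant-loss
# UDGM, a packet is received iff the receiver lies inside the transmission
# disk. Distance-loss UDGM additionally applies the "success ratio TX" and
# "success ratio RX" probabilities.
import math
import random

def in_range(tx_pos, rx_pos, tx_range):
    # Constant-loss UDGM: inside the disk or nothing.
    return math.dist(tx_pos, rx_pos) <= tx_range

def udgm_distance_loss(tx_pos, rx_pos, tx_range, success_tx, success_rx, rng):
    # The packet must be transmitted successfully, reach the receiver's
    # disk, and then be received successfully.
    if not in_range(tx_pos, rx_pos, tx_range):
        return False
    return rng.random() < success_tx and rng.random() < success_rx

print(in_range((0, 0), (3, 4), tx_range=5.0))  # distance 5.0: inside the disk
print(in_range((0, 0), (3, 4), tx_range=4.0))  # outside: no packet at all
```

With both success ratios set to 1.0, distance-loss UDGM degenerates to the constant-loss disk model.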
COOJA SIMULATION INTERFACE
The COOJA network simulator interface comprises five windows. The network
window displays the physical arrangement of the motes; to build a topology, one
can change the physical position of the motes. In the network window, motes have
different colors according to their functionality, e.g., the sink mote is green and
the sender motes are yellow. Mote attributes, the radio environment of each mote,
the mote type, and the radio traffic between motes can also be seen visually in the
network window. The simulation control window helps to control the speed of the
simulation and to pause, start, and reload the currently running simulation.
The note window is used to write down the theory and key points of the simulation
and save them. The Cooja network simulator also shows a timeline for each mote in
the running simulation; the timeline can be used to visualize both the power
consumption and the network traffic in the wireless sensor network. In the third
row for mote 1, the color of the mote shows the power state of the hardware: white
if the mote is off and gray if it is on. The red line in the second row shows that
whenever the node hardware is on, its radio transceiver is also on. In the first
row of the timeline of mote 1, radio transmissions are shown in blue, receptions
in green, and radio interference in red. The COOJA simulation is shown in Fig. 5.
36. VI PROGRAMMING BEYOND
INDIVIDUAL NODES
State-centric Programming
Many sensor network applications, such as target tracking, are not simply generic
distributed programs over an ad hoc network of energy-constrained nodes. A
distinctive property of physical states, such as location, shape, and motion of
objects, is their continuity in space and time. Their sensing and control is typically
done through sequential state updates. System theories, the basis for most signal
and information processing algorithms, provide abstractions for state update, such
as:
xk+1 = f(xk, uk)   (1)
yk = g(xk, uk)   (2)
where x is the state of a system, u are the inputs, y are the outputs, k is an
integer update index over space and/or time, f is the state update function, and g
is the output or observation function. A collaboration group is a set of entities
that contribute to a state update. These entities can be physical sensor nodes, or
they can be more abstract system components such as virtual sensors or mobile
agents hopping among sensors. In this context, they are all referred to as agents.
A collaboration group provides two abstractions: its scope to encapsulate network
topologies and its structure to encapsulate communication protocols. The scope of
a group defines the membership of the nodes with respect to the group. Grouping
nodes according to some physical attributes rather than node addresses is an
important and distinguishing characteristic of sensor networks.
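As a toy illustration of the state-update abstraction in (1) and (2), consider a one-dimensional constant-velocity target; the dynamics and names are illustrative only, not taken from any particular tracker:

```python
# A minimal sketch of the state-update abstraction: a one-dimensional
# constant-velocity target. The state x = (position, velocity) is updated
# sequentially by f, and g produces the observation (here, just the
# position). Names and dynamics are illustrative assumptions.

def f(x, u):
    # State update: position advances by velocity; u nudges the velocity.
    position, velocity = x
    return (position + velocity, velocity + u)

def g(x, u):
    # Observation function: a sensor that reports only the position.
    position, _ = x
    return position

x = (0.0, 1.0)          # start at position 0 with velocity 1
observations = []
for k in range(4):
    observations.append(g(x, 0.0))
    x = f(x, 0.0)
print(observations)  # [0.0, 1.0, 2.0, 3.0]
```

A Kalman filter or Bayesian tracker has exactly this shape, with f and g replaced by stochastic update and measurement models.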
A geographically constrained group (GCG) consists of members within a
prespecified geographical extent. Since physical signals, especially the ones from
point targets, may propagate only to a limited extent in an environment, this kind
of group naturally represents all the sensor nodes that can detect the signal.
N-hop Neighborhood Group: When the communication topology is more important
than the geographical extent, hop counts are useful to constrain group
membership. An n-hop neighborhood group (n-HNG) has an anchor node and
defines that all nodes within n communication hops are members of the group.
Since it uses hop counts rather than Euclidean distances, local broadcasting can
be used to determine the scope.
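The n-HNG scope can be computed from the anchor node by breadth-first search. The sketch below assumes global knowledge of the topology for illustration, whereas a real deployment would use local broadcasts carrying a decrementing hop counter:

```python
# Hypothetical sketch of determining n-hop neighborhood group (n-HNG)
# membership by breadth-first search from the anchor node.
from collections import deque

def n_hop_group(adjacency, anchor, n):
    hops = {anchor: 0}
    queue = deque([anchor])
    while queue:
        node = queue.popleft()
        if hops[node] == n:
            continue  # do not expand beyond n hops
        for neighbor in adjacency.get(node, ()):
            if neighbor not in hops:
                hops[neighbor] = hops[node] + 1
                queue.append(neighbor)
    return set(hops)

# A small communication topology: a 0-1-2-3 chain with a branch 1-4.
adjacency = {0: [1], 1: [0, 2, 4], 2: [1, 3], 3: [2], 4: [1]}
print(n_hop_group(adjacency, anchor=0, n=2))  # {0, 1, 2, 4}
```

With n = 0 the group degenerates to the anchor alone, and 1-HNG gives the anchor's direct radio neighborhood, the case used later for hopping decisions.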
37. Publish/Subscribe Group: A group may also be defined more dynamically, by all
entities that can provide certain data or services, or that can satisfy certain
predicates over their observations or internal states. A publish/subscribe
group(PSG) comprises consumers expressing interest in specific types of data or
services and producers that provide those data or services.
Acquaintance Group: An even more dynamic kind of group is the acquaintance
group (AG), where a member belongs to the group because it was “invited” by
another member in the group. The relationships among the members may not
depend on any physical properties at the current time but may be purely logical
and historical. A member may also quit the group without requiring permission
from any other member. An AG may have a leader, serving as the rendezvous
point. When the leader is also fixed on a node or in a region, GPSR [112], ad hoc
routing trees, or directed diffusion types of protocols may facilitate the
communication between the leader and the other members. An obvious use of
this group is to monitor and control mobile agents from a base station.
PIECES (Programming and Interaction Environment for Collaborative Embedded
Systems) is a software framework that implements the methodology of state-
centric programming over collaboration groups to support the modeling,
simulation, and design of sensor network applications. It is implemented in a
mixed Java-Matlab environment.
PIECES comprises principals and port agents. Figure 6 shows the basic relations
among principals and port agents. A principal is the key component for
maintaining a piece of state. Typically, a principal maintains state corresponding to
certain aspects of the physical phenomenon of interest. The role of a principal is
to update its state from time to time, a computation corresponding to evaluating
function f in (1). A principal also accepts other principals’ queries of certain views
on its own state, a computation corresponding to evaluating function g in (2).
Fig.6 Relation among principals and port agents
38. A port agent may be an input, an output, or both. An output port agent is also
called an observer, since it computes outputs based on the host principal’s state
and sends them to other agents. Observers may be active or passive. An active
observer pushes data autonomously to its destination(s), while a passive observer
sends data only when a consumer requests it. A principal typically attaches a set
of observers to other principals and creates a local input port agent to receive the
information collected by the remote agents. Thus port agents capture
communication patterns among principals. The execution of principals and port
agents can be either time-driven or event-driven, where events may include
physical events that are pushed to them (i.e., data-driven) or query events from
other principals or agents (i.e., demand-driven). Principals maintain state,
reflecting the physical phenomena. These states can be updated, rather than
rediscovered, because the underlying physical states are typically continuous in
time. How often the principal states need to be updated depends on the dynamics
of the phenomena or physical events. The executions of observers, however,
reflect the demands of the outputs. If an output is not currently needed, there is
no need to compute it. The notion of “state” effectively separates these two
execution flows.
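The active/passive observer distinction can be sketched as follows; the `Principal` class and its methods are hypothetical, not part of PIECES's actual API:

```python
# Hypothetical sketch of the observer port-agent pattern: an active
# observer is pushed the principal's view on every state update, while a
# passive consumer pulls the view only when it needs it.

class Principal:
    def __init__(self):
        self.state = {"position": (0.0, 0.0)}
        self.active_observers = []

    def attach_active(self, callback):
        # An active observer autonomously receives every new view.
        self.active_observers.append(callback)

    def view(self):
        # Corresponds to evaluating g on the current state (passive pull).
        return self.state["position"]

    def update(self, new_position):
        # Corresponds to evaluating f; active observers are then pushed to.
        self.state["position"] = new_position
        for push in self.active_observers:
            push(self.view())

received = []
p = Principal()
p.attach_active(received.append)   # active: data arrives on every update
p.update((1.0, 2.0))
p.update((2.0, 3.0))
print(received)   # [(1.0, 2.0), (2.0, 3.0)]
print(p.view())   # passive pull: computed only on demand
```

The separation matters for energy: a passive observer costs nothing while no consumer asks, whereas an active one spends communication on every state update.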
Principals can form groups. A principal group gives its members a means to find
other relevant principals and attaches port agents to them. A principal may belong
to multiple groups. A port agent, however, serving as a proxy for a principal in the
group, can only be associated with one group. The creation of groups can be
delegated to port agents, especially for leader-based groups. The leader port
agent, typically of type input, can be created on a principal, and the port agent
can take group scope and structure parameters to find the other principals and
create follower port agents on them. Groups can be created dynamically, based on
the collaboration needs of principals. For example, when a tracking principal finds
that there is more than one target in its sensing region, it may create a
classification group to fulfill the need of classifying the targets. The grouping
of ports is shown in Fig. 7.
Fig.7 Grouping of the ports
39. A group may have a limited time span. When certain collaborations are no longer
needed, their corresponding groups can be deleted. The structure of a group
allows its members to address other principals through their role, rather than their
name or logical address. For example, the only interface that a follower port agent
in a leader-follower structured group needs is to send data to the leader. If the
leader moves to another node while a data packet is moving from a follower agent
to the leader, the group management protocol should take care of the dangling
packet.
A principal is hosted by a specific network node at any given time. The most
primitive type of principal is a sensing principal, which is fixed to a sensor node. A
sensing principal maintains a piece of (local) state related to the physical
phenomenon, based solely on its own local measurement history. Mobile principals
bring additional challenges to maintaining the state.
PIECES provides a mixed-signal simulator that simulates sensor network
applications at a high level. The simulator is implemented using a combination of
Java and Matlab. An event-driven engine is built in Java to simulate network
message passing and agent execution at the collaboration-group level. A
continuous-time engine is built in Matlab to simulate target trajectories, signals
and noise, and sensor front ends. The main control flow is in Java, which
maintains the global notion of time. The interface between Java and Matlab also
makes it possible to implement functional algorithms such as signal processing
and sensor fusion in Matlab, while leaving their execution control in Java. A three-
tier distributed architecture is designed through Java registrar and RMI interfaces,
so that the execution in Java and Matlab can be separately interrupted and
debugged.
Using the state-centric model, programmers decouple a global state into a set of
independently maintained pieces, each of which is assigned a principal. To update
the state, principals may look for inputs from other principals, with sensing
principals supporting the lowest-level sensing and estimation tasks.
Communication patterns are specified by defining collaboration groups over
principals and assigning corresponding roles for each principal through port
agents. A mobile principal may define a utility function, to be evaluated at
candidate sensor nodes, and then move to the best next location, all in a way
transparent to the application developer. Developers can focus on implementing
the state update functions as if they are writing centralized programs.
40. A tracking principal updates the track position state periodically. It collects local
individual position estimates from sensors close to the target by a GCG with a
leader-follower relation. The tracking principal is the leader, and all sensing
principals within a certain geographical extent centered about the current target
position estimate are the followers. The tracking principal also makes hopping
decisions based on its current position estimate and the node characteristic
information collected from its one-hop neighbors via a 1-HNG. When the principal
is initialized, it creates the agents and corresponding groups. Behind the scenes,
the groups create follower agents with specific types of output, indicated by the
sensor modalities. Without further instructions from the programmer, the
followers periodically report their outputs to the input port agents. Whenever the
leader principal is activated by a time trigger, it updates the target position using
the newly received data from the followers and selects the next hosting node
based on neighbor node characteristics. Both the classification principal and the
identity management principal operate on the identity state, with the identity
management principal maintaining the “master copy” of the state. In fact, the
classification principal is created only when there is a need for classifying targets.
The classification principal uses a GCG to collect class feature information from
nearby sensing principals in the same way that tracking principals collect location
estimates. The identity management principal forms an AG with all other identity
management principals that may have relevant identity information. They become
members of a particular identity group only when targets intersect and their
identities mix. Both classification principals and identity management principals
are attached to the tracking principal for their mobility decisions. However, the
formation of an AG among these three principals also provides the flexibility that
they can make their own hopping decisions without changing their interaction
interface.
41. VII STATE-CENTRIC
PROGRAMMING
CSIP (collaborative signal and information processing) applications, such as target
tracking, are not generic distributed programs. Deeply rooted in these applications are the notions of
states of physical phenomena and models of their evolution over space and time.
We can represent some states centrally, as in the point target-tracking example,
but must represent others in a distributed fashion, as in the contour-tracking
case. A distinct property of physical states, such as the location, shape, and
motion of objects, is continuity in space and time. We typically handle the sensing
and control of these states through sequential state updates. System theories, the
basis for many signal processing and control algorithms, provide the following
statecentric abstraction for state updating:
xk+1 = f(xk, uk)
yk = g(xk, uk)
(1)
(2)
where x is the system state, k is an integer update index over space or time, u is
input, y is output, f is the state update function, and g is the output or observation
function. This formalization is broad enough to capture a wide variety of algorithms
in sensor fusion, signal processing, and control (for example, Kalman filtering,
Bayesian estimation, system identification, feedback control laws, automata, and so
on). State-centric programming abstractions have been successfully applied to
synchronous VLSI circuit designs and (centralized) control system designs.
Synchronous languages such as Signal (www.irisa.fr/espresso/Polychrony) and
Esterel (www.sop.inria.fr/esterel.org) and mixed-signal visual languages such as
Matlab’s Simulink (www.mathworks.com) and Ptolemy II’s CT domain
(ptolemy.eecs.berkeley.edu) are all examples of state-centric programming models. However,
in a distributed real-time embedded system, the formulation is not as cleanly
represented as in the abstraction just given. The relationship among subsystems can
be highly dynamic.
We must address concerns such as
• Where are the state variables stored?
• Where do the inputs come from?
• Where do the outputs go?
• Where are the functions f and g evaluated?
• How long does it take to acquire the set of inputs?
• Are the inputs in uk acquired synchronously?
• Do the inputs arrive in the correct order through communication?
• What is the choice of the update interval? Are they consistent?
42. System designers cannot be entirely shielded from these issues without seriously
compromising system correctness and efficiency. These concerns address where
and when, rather than how, to perform sensing, computation, and actuation, and
play a central role in achieving the overall system performance. However,
traditional programming models and languages don’t support these
“nonfunctional” aspects of computation (related to concurrency, reactiveness,
networking, and resource management) well. We need novel design
methodologies and frameworks that provide meaningful abstractions for these
issues, so that domain experts can continue to express algorithms and write
programs in the style of these abstractions but still maintain an intuitive
understanding of where and when to perform these operations. Domain-specific
runtime systems are needed to support this design methodology, to ensure correct
and efficient execution and to allow transparent layering-in of features such as
security and reliable communication.
43. 6.4 ASSIGNMENT
S.No Question K-Level CO
1 Implement the Distance Vector Routing algorithm
using Network Simulator
K2 CO6
2 Implement the Link State Routing algorithm using
Network Simulator
K2 CO6
44. PART A Q & A
1. What are Berkeley motes? (CO6-K1)
The Berkeley motes are a family of embedded sensor nodes sharing roughly the
same architecture.
2. What are the features of the resource-constrained hardware that Contiki OS
supports? (CO6-K1)
Lower Power
Limited memory
Slow CPU
Size (Small)
Limited hardware parallelisms
Communication using radio
Low-bandwidth
Short range
3. What are the basic components of a node? (CO6-K1)
Controller
Sensors and actuators
Communication
Power supply
Memory
4. Define PIECES. (CO6-K1)
PIECES (Programming and Interaction Environment for Collaborative Embedded
Systems) is a software framework that implements the methodology of state-centric
programming over collaboration groups to support the modeling, simulation, and
design of sensor network applications. It is implemented in a mixed Java-Matlab
environment.
45. 5.Define TOSSIM. (CO6-K1)
TOSSIM is a dedicated simulator for TinyOS applications running on one or more
Berkeley motes. The key design decisions on building TOSSIM were to make it
scalable to a network of potentially thousands of nodes, and to be able to use the
actual software code in the simulation. To achieve these goals, TOSSIM takes a
cross-compilation approach that compiles the nesC source code into components in
the simulation.
6. Define the COOJA network simulator interface. (CO6-K1)
The COOJA interface comprises five windows. The network window displays the
physical arrangement of the motes; to build a topology, one can change the
physical position of the motes. In the network window, motes have different colors
according to their functionality, e.g., the sink mote is green and the sender motes
are yellow. Mote attributes, the radio environment of each mote, the mote type,
and the radio traffic between motes can also be seen visually in the network
window. The simulation control window helps to control the speed of the simulation
and to pause, start, and reload the currently running simulation.
7. What is NS2? (CO6-K1)
The simulator ns-2 is an open-source network simulator that was originally designed
for wired, IP networks. Extensions have been made to simulate wireless/mobile
networks (e.g. 802.11 MAC and TDMA MAC) and more recently sensor networks.
While the original ns-2 only supports logical addresses for each node, the
wireless/mobile extension of it introduces the notion of node locations and a simple
wireless channel model. This is not a trivial extension, since once the nodes move,
the simulator needs to check for each physical layer event whether the destination
node is within the communication range. For a large network, this significantly slows
down the simulation speed.
8. What are the components of a node-level simulator? (CO6-K1)
Communication model:
Physical environment model
Statistics and visualization
Sensor node model
46. 9. What is meant by Contiki OS? (CO6-K1)
Contiki OS is an open-source operating system for resource-constrained hardware
devices with low power and little memory.
10. Define TinyOS (CO6-K1)
TinyOS aims at supporting sensor network applications on resource-constrained
hardware platforms, such as the Berkeley motes. Like many operating systems,
TinyOS organizes components into layers: the lower a layer is, the “closer” it is to the
hardware; the higher a layer is, the “closer” it is to the application. In addition to the
layers, TinyOS has a unique component architecture and provides as a library a set
of system software components. A component specification is independent of the
component implementation.
11. What is meant by nesC? (CO6-K1)
nesC is an extension of C to support and reflect the design of TinyOS. It provides a
set of language constructs and restrictions to implement TinyOS components and
applications. A component in nesC has an interface specification and an
implementation. To reflect the layered structure of TinyOS, interfaces of a nesC
component are classified as provides or uses interfaces.
12. What are the two types of code in nesC? (CO6-K1)
Asynchronous code (AC): Code that is reachable from at least one interrupt
handler.
Synchronous code (SC): Code that is only reachable from tasks.
47. PART B QUESTIONS
1. Explain the sensor node architecture in detail. (CO6-K1)
2. What are the programming challenges of sensor network tools? (CO6-K1)
3. Explain the following in detail: a) TinyOS, b) nesC. (CO6-K1)
4. Explain in detail the Contiki OS simulator. (CO6-K1)
5. Write a short note on key management and distribution. (CO6-K1)
6. Explain in detail the ns-2 simulator and its extension to sensor networks.
(CO6-K1)
7. Explain the following: a) COOJA, b) TOSSIM. (CO6-K1)
8. Explain programming beyond individual nodes. (CO6-K1)
9. Explain state-centric programming. (CO6-K1)
49. 6.8 REAL TIME APPLICATIONS IN DAY TO DAY LIFE AND TO
INDUSTRY
S.No Applications in Day-to-Day Life Applications in Industry
1 Mobile Phone Sensors Draw the diagrammatically
presented drawings
2 Colleges- Government - Local Area
Network
Banking
50. CONTENT BEYOND THE SYLLABUS
VARIOUS WIRELESS SENSOR NETWORK SIMULATORS
EmStar
The introduction of EmStar and the comparison with other simulation tools will be
discussed in this subsection.
Overview
EmStar is an emulator specifically designed for WSNs, built in C; it was first
developed at the University of California, Los Angeles. EmStar is a trace-driven
emulator running in real time, and it runs on the Linux operating system. The
emulator supports the development of WSN applications on more capable sensor
hardware. Besides libraries, tools, and services, an extension of the Linux
microkernel is included in the EmStar emulator.
Merits and Limitations
EmStar contains both merits and limitations when people use it to simulate WSNs.
To the merits, firstly, the modular programming model in EmStar allows the users
to run each module separately without sacrificing the reusability of the software.
EmStar has a robustness feature that it can mitigate faults among the sensors,
and it provides many modes make debug and evaluate much easier. There is a
flexible environment in EmStar that users can freely change between deployment
and simulation among sensors. Also with a standard interfaces, each service can
easily be interconnected. EmStar has a GUI, which is very helpful for users to
control electronic devices. When using EmStar, every execution platform is written
by the same codes, which will decrease bugs when iterate the separate modes. In
addition, EmStar provides many online documents to facilities the widely use of
this emulator. However, this emulator contains some drawbacks. For example, it
can not support large number of sensors simulation, and the limited scalability will
decrease the reality of simulation, shown in Figure 5. In addition, EmStar is can
only run in real time simulation. Moreover, this emulator can only apply to iPAQ-
class sensor nodes and MICA2 motes. All these drawbacks limit the use of this
emulator. In sum, both advantages and disadvantages are included in theEmStar
design.
OMNeT++
The introduction of OMNeT++ and the comparison with other simulation tools will
be discussed in this subsection.
Overview
OMNeT++ is a discrete-event network simulator built in C++. OMNeT++ offers both a non-commercial license, used at academic institutions or non-profit research organizations, and a commercial license, used in for-profit environments.
51. This simulator supports a modular programming model. Users can run the OMNeT++ simulator on Linux, Unix-like systems, and Windows. OMNeT++ is a popular general-purpose network simulator that can be used in both wired and wireless areas. Most frameworks and simulation models in OMNeT++ are open source.
Merits and Limitations
OMNeT++ has both merits and limitations when used to simulate WSNs. On the merits side, OMNeT++ provides a powerful GUI, which makes tracing and debugging much easier than with other simulators. Although the initial OMNeT++ did not provide a module library specifically for WSN simulation, thanks to the conscientious contributions of the supporting team, OMNeT++ now has a mobility framework. The simulator supports MAC protocols as well as some localized protocols in WSNs. OMNeT++ can be used to simulate channel control in WSNs, and it can also simulate power-consumption problems. However, there are still some limitations. For example, the number of available protocols is not large enough. In addition, compatibility problems arise because individual research groups developed the models separately; this makes combining models difficult, and programs are likely to report bugs. In sum, both advantages and disadvantages are present in the OMNeT++ design.
J-Sim
The introduction of J-Sim and the comparison with other simulation tools will be
discussed in this subsection.
Overview
J-Sim is a discrete-event network simulator built in Java. The simulator provides a GUI library, which helps users model or compile the Mathematical Modeling Language, a "text-based language" written for J-Sim models. J-Sim provides open-source models and online documents. The simulator is commonly used in the physiology and biomedicine areas, but it can also be used in WSN simulation. In addition, J-Sim can simulate real-time processes.
Merits and Limitations
J-Sim has both merits and limitations when used to simulate WSNs. First, models in J-Sim have good reusability and interchangeability, which facilitates easy simulation. Second, J-Sim contains a large number of protocols; through the detailed models in those protocols, the simulator can also support data diffusion, routing, and localization simulations in WSNs. J-Sim can simulate radio channels and power consumption in WSNs. Third, J-Sim provides a GUI library, which helps users trace and debug programs. The platform-independent design makes it easy for users to choose specific components to solve individual problems.
52. Fourth, compared with NS-2, J-Sim can simulate a larger number of sensor nodes, around 500, and it uses far less memory. However, the simulator has some limitations. Its execution time is much longer than that of NS-2. Because J-Sim was not originally designed to simulate WSNs, its inherent design makes it hard for users to add new protocols or node components.
ATEMU
The introduction of ATEMU and the comparison with other simulation tools will be
discussed in this subsection.
Overview
ATEMU is an emulator of an AVR processor for WSNs built in C; the AVR is a single-chip microcontroller commonly used in the MICA platform. ATEMU provides a GUI, Xatdb, which users can use to run code on sensor nodes, debug code, and monitor program execution. ATEMU runs on the Solaris and Linux operating systems. ATEMU is an emulator specific to WSNs; it lets users run TinyOS on MICA2 hardware. ATEMU can emulate not only the communication among the sensors but also every instruction executed in each sensor. The emulator provides open source code and online documents.
Merits and Limitations
ATEMU has both merits and limitations when used to simulate wireless sensor networks. First, ATEMU can simulate multiple sensor nodes at the same time, and each sensor node can run a different program. Second, ATEMU has a large library covering a wide range of hardware devices. Third, ATEMU provides a very high level of detail in WSN emulation. For example, it can emulate different sensor nodes in homogeneous or heterogeneous networks, and it can emulate different applications running on MICA. Users can also emulate power consumption or radio channels with ATEMU. Fourth, the GUI helps users debug programs and monitor program execution. The open source code reduces the cost of simulation. ATEMU provides an accurate model, which helps users make unbiased comparisons and obtain more realistic results. The ATEMU component architecture is shown in Figure 6. However, the emulator also has some limitations. For instance, although ATEMU gives highly accurate results, its simulation time is much longer than that of other simulation tools. In addition, ATEMU has fewer functions for simulating routing and clustering problems. Therefore, ATEMU contains both merits and limitations.
53. Avrora
The introduction of Avrora and the comparison with other simulation tools will be
discussed in this subsection.
Overview
Avrora is a simulator built in Java and designed specifically for WSNs. Similar to ATEMU, Avrora can simulate AVR-based microcontroller MICA2 sensor nodes. The simulator was developed by the University of California, Los Angeles Compilers Group. Avrora provides a wide range of tools that can be used in simulating WSNs; it combines the merits of TOSSIM and ATEMU while limiting their drawbacks. Avrora also supports energy-consumption simulation, and it provides open source code and online documents. However, the simulator has some drawbacks. It does not have a GUI. In addition, Avrora cannot simulate network management algorithms because it does not provide network communication tools.
Merits and Limitations
Avrora has both merits and limitations when used to simulate WSNs. First, Avrora is an instruction-level simulator, which bridges the gap between TOSSIM and ATEMU. Code in Avrora runs instruction by instruction, which provides faster speed and better scalability. Avrora can support the simulation of thousands of nodes and can save much more execution time with similar accuracy: it provides larger scalability than ATEMU with equivalent accuracy, and more accuracy than TOSSIM at equivalent scales of sensor nodes. Unlike TOSSIM and ATEMU, Avrora is built in Java, which provides much flexibility. Avrora can simulate projects written in different programming languages, whereas TOSSIM can only support TinyOS simulation.
54. 7. Assessment Schedule

Assessment                    Proposed Date   Actual Date
Unit 1 Assignment Assessment
Unit Test 1
Unit 2 Assignment Assessment
Internal Assessment 1
Retest for IA 1
Unit 3 Assignment Assessment
Unit Test 2
Unit 4 Assignment Assessment
Internal Assessment 2
Retest for IA 2
Unit 5 Assignment Assessment
Revision Test 1
Revision Test 2
Model Exam
Remodel Exam
University Exam
55. 8. PRESCRIBED TEXT BOOKS & REFERENCE BOOKS
TEXT BOOKS
C. Siva Ram Murthy and B. S. Manoj, "Ad Hoc Wireless Networks: Architectures and Protocols", Prentice Hall PTR, 2004. (UNIT I)
Holger Karl and Andreas Willig, "Protocols and Architectures for Wireless Sensor Networks", John Wiley, Jan 2006. (UNITS II-V)
REFERENCE BOOKS
Feng Zhao and Leonidas Guibas, "Wireless Sensor Networks: An Information Processing Approach", Elsevier, 2004.
Charles E. Perkins, "Ad Hoc Networking", Addison Wesley, 2000.
I. F. Akyildiz, W. Su, Y. Sankarasubramaniam, and E. Cayirci, "Wireless sensor networks: a survey", Computer Networks, Elsevier, 2002, 394-422.
56. 9. MINI PROJECT SUGGESTIONS
1. Secure and Warning System in Hairpin Bends in Hill Stations Based on RF Technology
2. Anti-Theft Protection of Vehicle by GSM & GPS with Fingerprint Verification
3. Body Movement Detection for Coma Patients Using Zigbee
4. Implementation of Wireless Sensor Network for Real-Time Overhead Tank Water Quality Monitoring
5. Wireless Sensor Network Based Air Quality Monitoring System
6. WSN Based Monitoring of Temperature and Humidity of Soil Using Arduino
7. WSN and GSM Module Based Automated Irrigation System
8. WSN Based Wireless SCADA
9. Eyeball Controlled Automatic Wheelchair
10. Multi-Sensor Based Security Robot Using Zigbee
11. Sensor Based Brushless DC Motor Speed Control Using Microcontroller
12. Remote Monitoring System Using XBee
13. Design of a Low-Cost Contactless Digital Tachometer with Added Wireless Feature
14. Street Light Glow on Detecting Vehicle Movement Using Sensor
15. Accident Prevention Using Eye Blinking and Head Movement
57. Thank you