
Contents

Abstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . I

1 Wireless Sensor Network 3


1.1 Main features . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.1.1 Topologies . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.1.2 Metrics and constraints . . . . . . . . . . . . . . . . . . 6
1.2 Systems Challenge . . . . . . . . . . . . . . . . . . . . . . . . 7
1.3 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.4 Industrial application . . . . . . . . . . . . . . . . . . . . . . . 10
1.4.1 Physical network topology and logical topology . . . . 11
1.4.2 Traffic characteristics and metrics . . . . . . . . . . . . . 12

2 IEEE 802.15.4 and ZigBee 14


2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.2 IEEE 802.15.4 features . . . . . . . . . . . . . . . . . . . . . . 15
2.3 WPAN Device Architecture . . . . . . . . . . . . . . . . . . . 17
2.3.1 IEEE 802.15.4 PHY . . . . . . . . . . . . . . . . . . . 18
2.3.2 IEEE 802.15.4 MAC . . . . . . . . . . . . . . . . . . . 19
2.3.3 Data Transfer model . . . . . . . . . . . . . . . . . . . 23
2.4 ZigBee routing . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.5 ZigBee upper layers . . . . . . . . . . . . . . . . . . . . . . . . 26

3 Routing Protocol 28
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.2 The RPL protocol . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.3 DIO transmission and eligibility . . . . . . . . . . . . . . . . 30
3.4 Data structure . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.5 Objective Code Points and Objective Function . . . . . . . . . 33
3.6 DAG discovery rules . . . . . . . . . . . . . . . . . . . . . . . 34

3.7 Candidate DAG Parent States and Stability . . . . . . . . . . 35
3.8 Forwarding . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36

4 RPL Network Simulator 37


4.1 OMNeT++ . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
4.2 Mobility Framework . . . . . . . . . . . . . . . . . . . . . . . 38
4.2.1 Node's structure . . . . . . . . . . . . . . . . . . . . . 39
4.2.2 Communication between layers . . . . . . . . . . . . . . 40
4.2.3 Network implementation . . . . . . . . . . . . . . . . . 42
4.2.4 Network parameters . . . . . . . . . . . . . . . . . . . 46
4.3 How to improve network stability . . . . . . . . . . . . . . . . 48

5 Results 50
5.1 Simulation results . . . . . . . . . . . . . . . . . . . . . . . . 50

6 Conclusion 61

7 Appendix 63

List of Figures

1.1 A wireless sensor networks . . . . . . . . . . . . . . . . . . . . 3


1.2 Typical industrial topology . . . . . . . . . . . . . . . . . . . . 11

2.1 ZigBee protocol stack . . . . . . . . . . . . . . . . . . . . . . . 15


2.2 ZigBee network topology . . . . . . . . . . . . . . . . . . . . . 16
2.3 IEEE 802.15.4 architecture . . . . . . . . . . . . . . . . . . . . 18
2.4 IEEE 802.15.4 superframe . . . . . . . . . . . . . . . . . . . . 20
2.5 CSMA/CA algorithm . . . . . . . . . . . . . . . . . . . . . . . 22
2.6 Communication from a device to a coordinator in a nonbeacon-
enabled network . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.7 Communication from a device to a coordinator in a beacon-
enabled network . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.8 Communication from a coordinator to a device in a beacon-
enabled network . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.9 Communication from a coordinator to a device in a nonbeacon-
enabled network . . . . . . . . . . . . . . . . . . . . . . . . . . 25

4.1 OMNet++ module concept . . . . . . . . . . . . . . . . . . . 37


4.2 Simulator node stack . . . . . . . . . . . . . . . . . . . . . . . 39

5.1 Nodes connected by the DAG . . . . . . . . . . . . . . . . . . 51


5.2 Experimental latency without network DIO updating . . . . . 51
5.3 Busy channel probability . . . . . . . . . . . . . . . . . . . . . 52
5.4 Empirical latency without network DIO updating . . . . . . . 52
5.5 Analytical latency . . . . . . . . . . . . . . . . . . . . . . . . . 53
5.6 Best parent selected by the node number three . . . . . . . . . 54
5.7 Logical Topology variation . . . . . . . . . . . . . . . . . . . . 54
5.8 Variation of α as a function of the traffic λ . . . . . . . . . . . 55

5.9 Consequence of the second method to alleviate the jitter of
the latency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
5.10 Best parent selected by the node three . . . . . . . . . . . . . 56
5.11 Latency end to end with and without the method to alleviate
the jitter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
5.12 Consequence of the method on the mean and variance of the
latency end to end varying the packet rate generation . . . . . 58
5.13 Consequence of the method on the mean and variance of the
latency end to end varying the number of nodes . . . . . . . . 59
5.14 Consequence of the method on the mean and variance of the
latency end to end varying the transmitted power . . . . . . . 59

7.1 Markov chain model . . . . . . . . . . . . . . . . . . . . . . . 64

Chapter 1

Wireless Sensor Network

In this chapter we give an overview of wireless sensor networks (WSNs). The basic characteristics are summarized, with particular reference to the communication protocol stack and to the applications of WSNs [1]. A wireless sensor network is composed of a large number of devices called nodes; each node is able to share information with other devices and can cooperate to perform advanced control, communication and signal processing tasks. Thanks to the availability of low-cost, low-power, miniature embedded processors, radios, and sensors, often integrated on a single chip, the use of WSNs is growing very fast.

Figure 1.1: A wireless sensor network

The individual devices in a wireless sensor network are inherently resource constrained: they have limited processing speed, storage capacity, and communication bandwidth, yet the nodes of a WSN must operate for long periods of time. Since the nodes are wireless, minimizing the energy consumption is very important, and the node's hardware components should be turned off most of the time. Most of the circuits can be powered off, with a standby power of about one microwatt. If such a device is active 1 percent of the time, its average power consumption is just a few microwatts.
Each node in a sensor network is typically equipped with a radio transceiver or another wireless communication device, a small microcontroller, and an energy source, usually a battery. In simple microcontrollers, miniaturization increases efficiency rather than adding functionality, allowing them to operate near one milliwatt while running at about 10 MHz. This scale of power can be obtained in many ways. Solar cells generate about 10 milliwatts per square centimeter outdoors and 10 to 100 microwatts per square centimeter indoors. Mechanical sources of energy, such as the vibration of windows and air-conditioning ducts, can generate about 100 microwatts. Low-power microprocessors have limited storage, typically less than 10 Kbytes of RAM for data and less than 100 Kbytes of ROM for program storage, or about 10,000 times less storage capacity than a PC. This limited amount of memory consumes most of the chip area and much of the power budget. Designers typically incorporate larger amounts of flash storage, perhaps a megabyte, on a separate chip.
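As a rough check of the figures quoted above, the average power of a duty-cycled node can be estimated as a weighted sum of its active and standby power. The sketch below uses only the illustrative numbers mentioned in this section (about 1 mW when active, about 1 µW in standby, a 1 percent duty cycle and roughly 10 µW/cm² of indoor solar harvesting); it is not measured data.

```python
# Rough duty-cycle estimate using the illustrative figures quoted above
# (about 1 mW active, about 1 uW standby, active 1% of the time).
active_power_w = 1e-3
standby_power_w = 1e-6
duty_cycle = 0.01

average_power_w = duty_cycle * active_power_w + (1 - duty_cycle) * standby_power_w
print(f"Average power: {average_power_w * 1e6:.1f} uW")   # about 11 uW

# Compare with an indoor solar cell harvesting roughly 10 uW per cm^2:
harvest_per_cm2_w = 10e-6
print(f"Solar cell area needed indoors: {average_power_w / harvest_per_cm2_w:.1f} cm^2")
```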
The purpose of a sensor is to measure changes over a certain range: when the measured parameters change, there is a voltage variation that is translated into a binary number, which may be stored or processed, and it is possible to do this with a few milliwatts while turning the device on only for a fraction of the time. Hence the nodes are equipped with extremely efficient Analog-to-Digital Converters (ADCs) that have an energy profile similar to the processor's. Once measurements are converted into bits, the information is transmitted to other nodes. The amount of energy required to communicate wirelessly increases rapidly with distance; for this reason, for small devices to cover long distances, the network must route the information hop by hop through intermediate nodes.
However, communicating with other nodes is the most expensive operation in a WSN. Each node has one or more sensing units, and nodes can act as information sources, sensing and collecting data samples from their environment. So each sensor supports a multi-hop routing algorithm, creating multi-hop wireless networks that convey data samples to other sensor nodes. Nodes can also act as information sinks, receiving dynamic configuration information from other nodes or external entities. Among the various nodes that form the network there can be base stations, one or more distinguished components of the WSN that are equipped with much more computational, energy and communication resources.

1.1 Main features


The resources of a wireless sensor network are limited, while there are many activities, such as sampling sensors, processing, and streaming data, that need to be served. The network must find the best path to route information from source to destination. The nodes abstract the physical hardware, while the network involves a more complex protocol stack divided into layers. The lower levels deal with the radio channel and transmit data frames. The second layer of the stack performs error coding and channel scheduling, as well as detecting the arrival of incoming packets and processing them into input buffers.
A typical application that runs at the top level of the protocol stack receives and processes a stream of sensor readings and then delivers important notifications to the network. A second component receives these notification messages, maintains a routing structure, and retransmits them along the next hop of a route to a data collection gateway. So the information moves hop by hop along a route from the point of production to the point of use; each node has a radio that provides a set of communication links to nearby nodes. Reading the information received, nodes can discover their neighbors and create a routing algorithm according to the application's needs. It is therefore important to determine connectivity variables to manage the network and to discover and adapt the network to the environmental conditions.

1.1.1 Topologies
Wireless sensor networks can assume different topologies [2]. We can distinguish three main categories:

Star Network A single node is used as a base station, so it can send and receive messages from all the other nodes. These other nodes just send and receive messages from the base station and cannot send messages to each other. The advantages of this type of network are its simplicity, its low power consumption and its low latency. The disadvantage of such a network is that the base station must be within radio transmission range of all the individual nodes, hence the network cannot be a large one.

Mesh Network A mesh network allows any node in the network to transmit to any other node that is within its radio transmission range. A node wishing to send a message to a device that is not in its communication range can use an intermediate node. This network topology has the advantages of redundancy and scalability, since the network can be expanded by adding other nodes; the disadvantages of this type of network are increased power consumption and delivery time.

Hybrid Star - Mesh Network In this network topology, the lowest power sensor nodes are not enabled with the ability to forward messages. This allows minimal power consumption to be maintained. However, other nodes on the network are enabled with multi-hop capability, allowing them to forward messages from the low power nodes to other nodes on the network. This is the topology implemented by the up-and-coming mesh networking standard known as ZigBee.

1.1.2 Metrics and constraints


Now we explore the evaluation metrics that will be used to evaluate a wireless sensor network. The most important metrics for wireless sensor networks are lifetime, coverage, cost, response time, temporal accuracy, security, and effective sample rate; besides, we have to consider that these metrics are correlated with each other [3]. Often it may be necessary to decrease performance in one metric, such as sample rate, in order to increase another, such as lifetime. These metrics are used to describe the capabilities and performance of a wireless sensor network.
It is essential that the nodes operate for years, so sensor nodes must be low-power. If we use nodes in different scenarios, a wireless sensor network architecture must be flexible enough to accommodate a wide range of application behaviors. In a typical deployment, hundreds of nodes will have to work for years; to support the lifetime requirements demanded, the system must be constructed so that it can tolerate and adapt to individual node failures, and each node must be designed to be as robust as possible. For these reasons we also have to establish a set of metrics that will be used to evaluate the performance of a device in a sensor network, such as communication rate, power consumption, and range. The communication rate has a significant impact on node performance: higher communication rates translate into the ability to achieve higher effective sampling rates and lower network power consumption. The transmission range has a significant impact on the minimal acceptable node density. If the nodes are positioned too far apart, it is impossible to create a network with enough redundancy to maintain a high level of reliability. So these metrics can be very important to extend the lifetime of a WSN.

1.2 Systems Challenge


As mentioned, WSN resources are restricted and there are multiple concurrent activities, such as sampling sensors, processing, and streaming data. The network must find the best interconnection between nodes and route information effectively from where it is produced to where it is used. But there are many obstacles that reduce the network efficiency.

Node connection
The lowest layer of the protocol stack in a WSN controls the physical radio device. When one node transmits a signal, a set of other nodes can receive it, provided it can be distinguished from other transmissions occurring at the same time. The link layer controls the channel and transmits only if the channel is clear. When nodes are not transmitting, they sample the channel, searching for a special symbol at the start of a packet that allows the node to synchronize. The packet layer manages buffers, schedules packets onto the radio, detects or even corrects errors, handles packet losses, and dispatches packets to system or application components.

Communication between nodes


Many protocols were developed to manage the communication between nodes; one of these is the flooding protocol, in which a root node broadcasts a packet with some identifying information. Receiving nodes retransmit the packet so that more distant nodes can receive it. However, a node can receive different versions of the same message from several neighboring nodes, so the network uses the identifying information to detect and suppress duplicates. There are many techniques to avoid congestion and minimize redundant transmissions.
One tool to determine a route is dissemination. Each packet identifies the transmitter and its distance from the root. To form a distributed tree, nodes record the identity of a neighbor closer to the root. The network can use this reverse communication tree for data collection, by routing data back to the root, or for data aggregation, by processing data at each level of the tree; nodes may also learn of potential parents by overhearing data messages. The network continually collects statistics to reinforce the best routes. In a WSN many nodes participate in the communication, and the participants are identified by attributes such as physical location or sensor value range. This style of routing has been formulated as directed diffusion, a process in which nodes express interest in data by attribute.
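To make the dissemination idea concrete, the following sketch shows a single flooding pass with duplicate suppression that also records, at each node, a parent closer to the root, forming the reverse collection tree described above. The connectivity graph and the node names are invented for illustration; a real WSN would run this distributedly over its radio links.

```python
from collections import deque

# Hypothetical connectivity graph: node -> neighbours it can hear.
links = {
    "root": ["a", "b"],
    "a": ["root", "b", "c"],
    "b": ["root", "a", "d"],
    "c": ["a", "d"],
    "d": ["b", "c"],
}

def flood(root):
    """Flood one packet from the root, suppressing duplicates and
    recording each node's parent (a neighbour closer to the root)."""
    parent = {root: None}
    hops = {root: 0}
    queue = deque([root])
    while queue:
        node = queue.popleft()
        for neighbour in links[node]:
            if neighbour not in parent:          # duplicate suppression
                parent[neighbour] = node         # remember the route toward the root
                hops[neighbour] = hops[node] + 1
                queue.append(neighbour)          # retransmit further outward
    return parent, hops

parent, hops = flood("root")
print(parent)   # reverse tree used to route collected data back to the root
```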

How to reduce power

In this section some techniques to minimize energy usage are shown. As we know, it is possible to turn off the device to save power and to send data only when there is a significant variation of the measured parameter. Another method is performing aggregation within the network, which makes it possible to reduce the amount of information to transmit. In this case a single packet is transmitted with a statistical summary of the measurements made by the nodes belonging to a sub-tree. Compression and scheduling can also conserve energy at the lower layers. Sensor networks can avoid explicit protocol messages by piggybacking control information on data messages and by overhearing packets destined for other nodes. They can use prescheduled time slots to reduce contention and the time the radio remains active.
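A minimal sketch of in-network aggregation, assuming a collection tree is already in place: each node combines its own reading with its children's summaries, so only one small record per sub-tree travels toward the root. The tree and the readings are invented for illustration.

```python
# Hypothetical collection tree and sensor readings.
children = {"root": ["a", "b"], "a": ["c", "d"], "b": [], "c": [], "d": []}
reading = {"root": 21.0, "a": 22.5, "b": 19.8, "c": 23.1, "d": 20.4}

def summarize(node):
    """Return (count, minimum, maximum, total) for the sub-tree rooted at node.
    In a real WSN each node would compute this and transmit one packet upward."""
    count, lo, hi, total = 1, reading[node], reading[node], reading[node]
    for child in children[node]:
        c_count, c_lo, c_hi, c_total = summarize(child)
        count += c_count
        lo, hi, total = min(lo, c_lo), max(hi, c_hi), total + c_total
    return count, lo, hi, total

count, lo, hi, total = summarize("root")
print(f"n={count}, min={lo}, max={hi}, avg={total / count:.2f}")
```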

1.3 Applications
Wireless sensor networks have revolutionized the world of distributed systems and have enabled several new applications. The applications for WSNs are of several kinds, but we can group them into two classes. The first class includes entity monitoring with limited signal processing requirements; these applications gather information of a relatively simple form, such as temperature and humidity, from the operating environment. The other class of applications requires the processing and transportation of large volumes of complex data. This class includes heavy industrial monitoring and video surveillance, where complex signal processing algorithms are usually employed. Below some of the most common applications are described.
Area monitoring In area monitoring, the WSN is deployed over a region where some phenomenon is to be monitored, for example for military purposes: a large quantity of sensor nodes could be deployed over a battlefield to detect enemy intrusion. When the sensors detect the monitored event, it needs to be reported to one of the base stations, which can take appropriate action. Depending on the exact application, different objective functions will require different data-propagation strategies, depending on factors such as the need for real-time response and the redundancy of the data.

Environmental monitoring Several WSNs have been deployed for environmental monitoring. The vast spaces involved in such applications require large volumes of low-cost sensor nodes that can be easily dispersed throughout the region; the nodes collect readings over time across a volume of space large enough to exhibit significant internal variation. For example, WSNs have been used to monitor the microclimate throughout the volume of redwood trees, helping to form a sample of entire forests. The nodes are used to monitor parameters like humidity, temperature and other environmental data.

Motion monitoring The analysis of structural response in, for example, bridges, buildings, and airframes places a further requirement to use data collected at different points in the structure in spatial-temporal analysis. This requires establishing a common, highly accurate time frame across nodes. Nodes share time-correlated raw or processed data to perform the structural analysis. For example, the sensor data from one instrument can be used as an input to a model-based analysis at each of several other points in a structure and compared to the sensor data at those points. Researchers refine these models by using them iteratively in normal circumstances to detect anomalies.

Habitat Study Such applications usually require sensing and collecting bio-physical or biochemical information from the entities under study. In many scenarios, habitat study requires relatively simple signal processing, such as data aggregation using minimum, maximum, or average operations. The networks are designed to be equipped with sensors for temperature, humidity, barometric pressure, and mid-range infrared.

1.4 Industrial application


We have already seen how WSNs are used in many applications, but one of the most important fields is industrial applications [5]. Wireless, low power field devices enable industrial users to significantly increase the amount of information collected and the number of control points that can be remotely managed. The wireless network needs to have three qualities: low power, high reliability, and easy installation and maintenance; to achieve these goals it is important to choose a suitable routing protocol. The industrial market classifies process applications into three broad categories and six classes.

1. Safety

• Class 0: Emergency action - always a critical function

2. Control

• Class 1: Closed loop regulatory control - often a critical function

• Class 2: Closed loop supervisory control - usually a non-critical function

• Class 3: Open loop control - the operator takes action and controls the actuator

3. Monitoring

• Class 4: Alerting - has a short-term operational effect

• Class 5: Logging and downloading or uploading

Industrial users are interested in deploying wireless networks for the monitoring classes 4 and 5 and in the non-critical portions of classes 3 and 2. Most low power and lossy network (LLN) systems in industrial automation environments will be used for low frequency data collection; sensors will have built-in microprocessors that may detect alarm conditions and generate critical alarm packets, which are expected to be granted a lower latency than periodic sensor data streams. Other devices will transmit a log file every day, typically with tens of Kbytes of data.

1.4.1 Physical network topology and logical topology


A typical physical network topology for industrial applications is shown in Figure 1.2.

Figure 1.2: Typical industrial topology

Actually, there is no single physical topology for an industrial process control network. In one case, a few hundred field devices are deployed to ensure global coverage using a wireless self-forming, self-healing mesh network that might be 5 to 10 hops across, while the backbone is many hops away. In the opposite extreme case, the backbone network spans all the nodes and most nodes are in direct sight of one or more backbone routers. But in the most common case there is a backbone that spans the wireless sensor network, so that any WSN node is only a few wireless hops away from the nearest backbone router. WSN nodes are expected to organize into self-forming, self-healing, self-optimizing logical topologies that enable leveraging the backbone when it is most efficient to do so. Regarding the logical topologies, for security, reliability, availability or serviceability reasons it is often required that they are not physically congruent over the radio network, that is, they form logical partitions of the LLN.
The aim of the network is to proactively build a set of routes between the sensors and one or more backbone routers and to maintain those routes at all times. Also, because of the lossy nature of the network, the routing in place should attempt to propose multiple paths in the form of Directed Acyclic Graphs oriented towards the destination.

1.4.2 Traffic characteristics and metrics


In industrial applications we can distinguish several large service categories:

Event data This category includes alarms and aperiodic data reports with bursty data and bandwidth requirements. In certain cases, alarms are critical and require a priority service from the network.

Client/Server In many cases the industrial networks implement a command/response protocol. The data bandwidth required is often bursty, and the latency requirement is based on the time to send tens of bytes over a 1200 baud link.

Bulk transfer Bulk transfers involve the transmission of blocks of data in multiple packets, where temporary resources are assigned to meet a transaction time constraint.

The routing protocol must also support different metric types for each link, used to compute the path according to some objective function (e.g. minimize latency) depending on the nature of the traffic.
For these reasons, the ROLL routing infrastructure has to compute and update constrained routes on demand. Industrial application data flows between field devices are not necessarily symmetric. In particular, asymmetrical costs and unidirectional routes are common for data and alerts. The routing protocol must be able to compute a set of unidirectional routes with potentially different costs that are composed of one or more different paths. For example, it will find multiple paths towards the same destination, which could be a node acting as a sink for the LLN.
In the next chapter we study the IEEE 802.15.4 standard, which is one of the most popular communication standards for WSNs.
Chapter 2

IEEE 802.15.4 and ZigBee

In this chapter we study one of the most popular protocols for wireless sensor networks: IEEE 802.15.4. This protocol specifies the physical layer and the medium access control layer. The ZigBee Alliance has specified an extension to IEEE 802.15.4 by adding the networking and application layers on top of the physical and MAC layers.

2.1 Introduction
The ZigBee Alliance is an association of companies working together to specify a standard for networks composed of a large number of nodes. These networks should be reliable and self-configurable, with very long battery life; they should be secure and have a low cost.

The ZigBee standard includes IEEE 802.15.4 as the physical and MAC layers and tries to standardize higher level applications.
ZigBee offers basically four kinds of services:

• Extra encryption: services where application and network keys implement extra 128-bit Advanced Encryption Standard (AES) encryption.

• Association and authentication, which means that only valid nodes can join the network.

• Routing protocol: for example Ad hoc On-Demand Distance Vector (AODV), a reactive ad hoc protocol, has been implemented to perform the data routing and forwarding process to any node in the network.

• Application services: an abstract concept called "cluster" is introduced. Each node belongs to a predefined cluster and can take a predefined number of actions.

Figure 2.1: ZigBee protocol stack (application/profiles and application framework on top, the ZigBee network and security layer below them, and the IEEE 802.15.4 MAC and PHY layers at the bottom)
Hence ZigBee is used to organize the network. To join the network, a node has to ask the coordinator for a network address as part of the association process. All the information in the network is routed using this address and not the MAC address; in this step authentication and encryption procedures are also performed.
Once a node has joined the network, it can send information to its peers through the routers, which are always awake waiting for packets. When a router gets a packet whose destination is within its radio range, it first checks whether the destination end device is awake or sleeping. In the first case the router sends the packet to the end device; if it is sleeping, the router buffers the packet until the end device wakes up and polls the router for pending data.

2.2 IEEE 802.15.4 features


IEEE 802.15.4-2006 is a standard which specifies the physical and MAC layers, designed for low cost and low-power wireless networks for monitoring and control applications [6]. This standard is used especially in Wireless Personal Area Networks (WPANs), and such a network is composed of two types of devices: Full Function Devices (FFDs), which can operate as a Personal Area Network (PAN) coordinator, as a coordinator or as a device, and Reduced Function Devices (RFDs), which are intended for simple applications that do not require a high rate.
These two kinds of nodes are used to create three topologies.

Figure 2.2: ZigBee network topology

Star Topology
In this topology there is a single PAN coordinator that communicates with the other nodes. After an FFD is activated for the first time, it may establish its own network and become the PAN coordinator. The network chooses a PAN identifier which is not used by any other network it can communicate with, so each star network can operate independently.

Peer-To-Peer Topology
Any device can communicate with any other device if they are within radio range of each other, but also in this case there is only one PAN coordinator. This topology allows multiple hops to route messages from any device to any other device in the network, and thanks to multipath routing it is very reliable.

Cluster Tree Topology
In this topology most devices are FFDs, and an RFD may connect to a cluster-tree network as a leaf at the end of a branch. Any of the FFDs can be a coordinator that synchronizes the other devices and coordinators, but only one is the PAN coordinator, which forms the first cluster head (CLH) and is identified by a cluster identifier (CID) of zero. The PAN coordinator chooses a PAN identifier and broadcasts beacon frames to neighboring nodes. A device that receives a beacon frame may send a request to join the network. If the PAN coordinator accepts the new node, the node adds the CLH as its parent in its neighbor list and begins transmitting periodic beacons, so that other candidate devices may then join the network at that device; moreover, the PAN coordinator can instruct a device to become the CLH of a new cluster adjacent to the first one.

2.3 WPAN Device Architecture


Now we see how a node is structured. The definition of the network layers is based on the OSI model; although only the lower layers are defined in the standard, interaction with upper layers is intended, possibly using an IEEE 802.2 logical link control sublayer accessing the MAC through a convergence sublayer.
Figure 2.3: IEEE 802.15.4 architecture (upper layers, IEEE 802.2 LLC accessing the MAC through the SSCS convergence sublayer, MAC, PHY and the physical medium)

Physical: contains the RF transceiver and its low-level control mechanism.
MAC: sublayer that provides access to the physical channel for all types of transfer.
Network layer: provides network configuration, manipulation, and message routing.
Application layer: provides the intended function of the device.
In the following subsections we analyze the physical and the MAC layers in more detail.

2.3.1 IEEE 802.15.4 PHY


The physical layer provides two services:

• The PHY data service: enables the transmission and reception of PHY
protocol data units (PPDU) across the physical radio channel.

• The PHY management service: interfaces to the physical layer management entity (PLME).

It is possible to choose between different types of PHY depending on the frequency band: there is a single channel between 868 and 868.6 MHz, 10 channels between 902.0 and 928.0 MHz, and 16 channels between 2.4 and 2.4835 GHz. Having several channels in different frequency bands enables the ability to relocate within the spectrum. The standard also supports dynamic channel selection, a scan function that steps through a list of supported channels in search of a beacon, receiver energy detection, link quality indication and channel switching.
This layer provides three important services:

Receiver Energy Detection (ED) estimates the received signal power within the bandwidth of an IEEE 802.15.4 channel. This parameter is used to decide whether a received signal can be accepted as a packet, but it can also be used by the network layer to establish a route.

Link Quality Indication (LQI) is a characterization of the strength and/or quality of a received packet. The measurement may be implemented using receiver ED, a signal-to-noise estimation, or a combination of these methods.

Clear Channel Assessment (CCA) is a logical function which determines the current state of use of the wireless medium. In this standard three methods of CCA are implemented (a simplified decision function is sketched after this list):

• Energy above threshold: CCA reports a busy channel if the detected energy is above a threshold.

• Carrier sense only: CCA shall report a busy medium only upon the detection of a signal with the modulation and spreading characteristics of IEEE 802.15.4.

• Carrier sense with energy above threshold: CCA shall report a busy medium only upon the detection of a signal with the modulation and spreading characteristics of IEEE 802.15.4 with energy above the ED threshold.
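The three CCA methods can be summarized as the small decision function anticipated above. This is a simplified model, not the standard's normative procedure; the energy threshold value and the flag indicating whether an IEEE 802.15.4 carrier was detected are assumed inputs provided by the radio.

```python
def channel_busy(mode, energy_dbm, carrier_detected, ed_threshold_dbm=-75.0):
    """Simplified CCA decision (illustrative only; the -75 dBm threshold
    is an assumed value, not taken from the standard).

    mode 1: energy above threshold
    mode 2: carrier sense only
    mode 3: carrier sense with energy above threshold
    """
    energy_above = energy_dbm > ed_threshold_dbm
    if mode == 1:
        return energy_above
    if mode == 2:
        return carrier_detected
    if mode == 3:
        return carrier_detected and energy_above
    raise ValueError("unknown CCA mode")

print(channel_busy(1, energy_dbm=-70.0, carrier_detected=False))  # True: strong energy
print(channel_busy(2, energy_dbm=-70.0, carrier_detected=False))  # False: no 802.15.4 carrier
```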

2.3.2 IEEE 802.15.4 MAC


The MAC sublayer handles access to the shared medium and provides two services: the MAC management service, interfacing to the MAC sublayer management entity (MLME), which is responsible for MAC management through a collection of primitives, and the MAC data service, which enables the transmission and reception of MAC protocol data units (MPDUs) across the PHY data service.

SUPERFRAME STRUCTURE
In order to allow guaranteed time slots for low-latency applications and applications requiring a specific data bandwidth, IEEE 802.15.4 networks can choose to synchronize their communication according to a superframe structure. Its form is decided by the coordinator, but basically it is divided into 16 equally sized slots and the beacon frame is sent in the first slot of each superframe. The beacons are used to synchronize the attached devices, to identify the PAN and to describe the structure of the superframe.

Figure 2.4: IEEE 802.15.4 superframe: 16 slots with the beacon in slot 0, a CAP (>= aMinCAPLength), a CFP containing the GTSs and an optional inactive period; SD = aBaseSuperframeDuration × 2^SO and BI = aBaseSuperframeDuration × 2^BO

In the superframe there can be an inactive portion, during which the coordinator does not interact with its PAN and may enter a low-power mode. The active portion is divided into:

• the contention access period (CAP), where any device communicating during the CAP uses a slotted CSMA/CA mechanism;

• the contention free period (CFP), which is divided into guaranteed time slots (GTSs). The GTSs always appear at the end of the active superframe, following the CAP. The PAN coordinator may allocate up to seven of these GTSs, and a GTS can occupy more than one slot period.
The beacon is transmitted at the beginning of slot 0 and the CAP starts immediately after the beacon. All frames transmitted in the CAP shall use slotted CSMA/CA to access the channel, except acknowledgements or any data frame that immediately follows the acknowledgement of a data request command. A transmission in the CAP must be complete one IFS period before the end of the CAP, where the IFS time is the amount of time necessary for the PHY to process the received packet.
The CFP starts just after the CAP and extends to the end of the active portion of the superframe. The length of the CFP is determined by the total length of all of the combined GTSs; transmissions in the CFP do not use CSMA/CA and must be complete one IFS period before the end of their GTS.
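The beacon interval (BI) and the superframe duration (SD) shown in Figure 2.4 are obtained from the beacon order BO and the superframe order SO as BI = aBaseSuperframeDuration × 2^BO and SD = aBaseSuperframeDuration × 2^SO. The sketch below evaluates them assuming the 2.4 GHz PHY values (aBaseSuperframeDuration = 960 symbols and 16 µs per symbol); other PHYs use different symbol rates.

```python
# IEEE 802.15.4 superframe timing (assumed 2.4 GHz PHY: 16 us per symbol).
A_BASE_SUPERFRAME_DURATION = 960      # symbols (60 symbols/slot * 16 slots)
SYMBOL_TIME_S = 16e-6                 # seconds per symbol at 2.4 GHz

def beacon_interval_s(bo):
    return A_BASE_SUPERFRAME_DURATION * (2 ** bo) * SYMBOL_TIME_S

def superframe_duration_s(so):
    return A_BASE_SUPERFRAME_DURATION * (2 ** so) * SYMBOL_TIME_S

# Example: BO = 6, SO = 3 -> an active portion of ~123 ms every ~983 ms,
# so the coordinator can sleep for the rest of the beacon interval.
bo, so = 6, 3
print(f"BI = {beacon_interval_s(bo) * 1e3:.2f} ms")
print(f"SD = {superframe_duration_s(so) * 1e3:.2f} ms")
```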

CSMA/CA Algorithm
Usually a PAN uses slotted CSMA/CA, but if beacons are not being used in the PAN, or a beacon cannot be located in a beacon-enabled network, unslotted CSMA/CA is used. In both cases the algorithm is implemented using units of time called backoff periods.
In slotted CSMA/CA the backoff period boundaries of every device in the PAN are aligned with the superframe slot boundaries of the PAN coordinator, so when a device wants to transmit, it can do so only at the next backoff period boundary; in unslotted CSMA/CA, instead, the backoff periods of one device do not need to be synchronized with the backoff periods of another device.

Each device maintains three variables:

• NB is the number of times that the CSMA/CA algorithm was required to back off while attempting the current transmission. It is initialized to 0 before every new transmission.

• CW is the contention window length, which defines the number of backoff periods that need to be clear of activity before the transmission can start. It is initialized to 2 before each transmission attempt and reset to 2 each time the channel is assessed busy. CW is only used for slotted CSMA/CA.

• BE is the backoff exponent, which is related to how many backoff periods a device shall wait before attempting to assess the channel.

The diagram of the algorithm is shown in Figure 2.5.

Figure 2.5: CSMA/CA algorithm

Step 1: in slotted CSMA/CA, NB, CW and BE are initialized and the boundary of the next backoff period is located.
Step 2: the MAC layer waits for a random delay in the range 0 to 2^BE − 1 complete backoff periods.
Step 3: the MAC layer requests that the PHY perform a CCA; the MAC sublayer proceeds only if the frame transmission and any acknowledgement can be completed before the end of the CAP.
Step 4: if the channel is assessed to be busy, the MAC sublayer shall increment both NB and BE by one, ensuring that BE is no more than aMaxBE. In slotted CSMA/CA, CW is also reset to 2. If the value of NB is less than or equal to macMaxCSMABackoffs, the CSMA/CA algorithm returns to step 2, otherwise it terminates with a Channel Access Failure status.
Step 5: if the channel is assessed to be idle, in slotted CSMA/CA the MAC sublayer shall ensure that the contention window has expired before starting the transmission. For this, the MAC sublayer first decrements CW by one; if CW is not equal to 0, it goes back to step 3, otherwise it starts the transmission on the boundary of the next backoff period. In unslotted CSMA/CA, the MAC sublayer starts the transmission immediately if the channel is assessed to be idle.
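The five steps above can be condensed into the following sketch of the slotted variant. The channel-sensing callback and the parameter values are placeholders; the defaults of the standard (macMinBE = 3, aMaxBE = 5, macMaxCSMABackoffs = 4) are used here only as illustrative constants, and the timing of the backoff periods is abstracted away.

```python
import random

MAC_MIN_BE = 3                 # standard defaults, used here only as examples
A_MAX_BE = 5
MAC_MAX_CSMA_BACKOFFS = 4

def slotted_csma_ca(cca_idle, battery_life_extension=False):
    """Sketch of slotted CSMA/CA. `cca_idle()` is an assumed callback that
    returns True when the PHY reports an idle channel on a backoff boundary."""
    nb = 0                                         # Step 1: initialize NB and BE
    be = min(2, MAC_MIN_BE) if battery_life_extension else MAC_MIN_BE
    while True:
        cw = 2                                     # contention window
        # Step 2: wait a random number of complete backoff periods
        # (a simulator would advance time by this many unit backoff periods).
        backoff_delay = random.randint(0, 2 ** be - 1)
        while True:
            # Step 3: perform CCA on the next backoff period boundary.
            if cca_idle():
                cw -= 1                            # Step 5: channel idle
                if cw == 0:
                    return "SUCCESS"               # transmit on the next boundary
                # CW != 0: repeat the CCA on the following boundary (step 3)
            else:
                # Step 4: channel busy, back off again with a larger window.
                nb += 1
                be = min(be + 1, A_MAX_BE)
                if nb > MAC_MAX_CSMA_BACKOFFS:
                    return "CHANNEL_ACCESS_FAILURE"
                break                              # back to step 2 with a new delay

# Example: a channel that is found idle about 60% of the time.
print(slotted_csma_ca(lambda: random.random() < 0.6))
```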

2.3.3 Data Transfer model


Here we examine the four types of data transfer transactions allowed in IEEE 802.15.4.
When a device wishes to transfer data to a coordinator in a nonbeacon-enabled network, it simply transmits its data frame, using unslotted CSMA/CA, to the coordinator; the acknowledgement is optional.

Figure 2.6: Communication from a device to a coordinator in a nonbeacon-enabled network

Figure 2.7: Communication from a device to a coordinator in a beacon-enabled network

If a device wishes to transfer data to a coordinator in a beacon-enabled network, it first listens for the network beacon. When the beacon is found, it synchronizes to the superframe structure. At the right time, it transmits its data frame, using slotted CSMA/CA, to the coordinator; there is an optional acknowledgement at the end. The data transfers are initiated by the device rather than by the coordinator, in order to save power.

Figure 2.8: Communication from a coordinator to a device in a beacon-enabled network

If a coordinator wishes to transfer data to a device in a beacon-enabled network, it indicates in the network beacon that a data message is pending. The device periodically listens to the network beacon and, if a message is pending, transmits a MAC command requesting this data, using slotted CSMA/CA. The coordinator optionally acknowledges the successful transmission of this packet, and the pending data frame is then sent using slotted CSMA/CA. The device acknowledges the successful reception of the data by transmitting an acknowledgement frame. Upon receiving the acknowledgement, the message is removed from the list of pending messages in the beacon.

Figure 2.9: Communication from a coordinator to a device in a nonbeacon-enabled network

Finally, if a coordinator wishes to transfer data to a device in a nonbeacon-enabled network, it stores the data for the appropriate device and waits for a data request. The device sends a MAC command requesting the data, using unslotted CSMA/CA, to its coordinator at an application-defined rate, and the coordinator acknowledges this packet. If data are pending, the coordinator transmits the data frame using unslotted CSMA/CA. If data are not pending, the coordinator transmits a data frame with a zero-length payload to indicate that there are no data.
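A toy model of this indirect (polled) transfer in a nonbeacon-enabled network is sketched below; the coordinator object and the frame representation are invented for illustration and do not correspond to the actual IEEE 802.15.4 frame formats.

```python
class Coordinator:
    """Toy model of indirect transfers in a nonbeacon-enabled network."""

    def __init__(self):
        self.pending = {}                 # device address -> queued payloads

    def queue_data(self, device, payload):
        self.pending.setdefault(device, []).append(payload)

    def on_data_request(self, device):
        """Called when a device polls the coordinator (after its data request
        has been acknowledged). Returns the data frame to transmit."""
        queue = self.pending.get(device, [])
        if queue:
            return {"dst": device, "payload": queue.pop(0)}
        # No pending data: reply with a zero-length payload frame.
        return {"dst": device, "payload": b""}

coordinator = Coordinator()
coordinator.queue_data("node-7", b"config update")
print(coordinator.on_data_request("node-7"))   # delivers the pending frame
print(coordinator.on_data_request("node-7"))   # zero-length payload: nothing pending
```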

2.4 ZigBee routing


The ZigBee routing algorithm is based on the notion of "Distance Vector" routing, in which each ZigBee router maintains a routing table entry for the route from a particular source to a particular destination [7]. This entry records both a "logical distance" to the destination and the address of the next router on the path to that destination.
Routes are established on demand using a route discovery process in which the source device broadcasts a route request command and the destination device sends back a route reply. After that, the node can send packets along this route.
In most wireless network applications, there is a distinguished device, often
called an ”aggregator”, to which every other device in the network must send
data on a regular basis. In order to prevent every device in the network from
having to discover the aggregator separately, the ZigBee PRO feature set
provides a special case of route discovery, in which a single route request
broadcast from the aggregator establishes an entry with the aggregator as a
destination in the routing tables of every router in the network.
ZigBee also introduces another kind of routing, known as source routing.
Whereas in distance vector routing the routing information is stored in rout-
ing tables in the devices that participate in relaying the frame, source routing
puts the routing information in the frame itself. Thus only the originator of
the frame, in this case the aggregator, needs to maintain an entry for the
route, but this routing table entry needs to store the entire path from the
aggregator to the destination. ZigBee PRO uses a route record command,
sent from the intended destination back to the aggregator, to record the path.
Thereafter, data frames may be sent along that path using source routing.
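The sketch below contrasts the two mechanisms just described: a table-driven next-hop lookup at every relay, as in distance vector routing, versus a source route carried in the frame itself, as recorded by the route record command. The topology, the addresses and the frame layout are invented for illustration.

```python
# Hypothetical next-hop routing tables (distance-vector style):
# at each router, destination -> next hop.
routing_table = {
    "A": {"D": "B"},
    "B": {"D": "C"},
    "C": {"D": "D"},
}

def table_driven_path(src, dst):
    """Each relay consults its own routing table for the next hop."""
    path, node = [src], src
    while node != dst:
        node = routing_table[node][dst]
        path.append(node)
    return path

def source_routed_frame(aggregator, recorded_route, payload):
    """The aggregator embeds the whole recorded path in the frame itself,
    so intermediate routers need no routing-table entry for this flow."""
    return {"source_route": [aggregator] + recorded_route, "payload": payload}

print(table_driven_path("A", "D"))                        # ['A', 'B', 'C', 'D']
print(source_routed_frame("A", ["B", "C", "D"], "data"))  # route travels in the frame
```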

2.5 ZigBee upper layers


In this paragraph we describe briefly the stack above the network layer.

Application layer
As we know, the application layer is the highest-level layer of the protocol stack, and it defines the interface of the ZigBee system to its end users. It is composed of two main components.
The ZigBee Device Object (ZDO) establishes the role of a device as either coordinator or end device, but it is also responsible for the discovery of new one-hop devices on the network and for the identification of the services that a node offers. It may also establish secure links with external devices and reply to binding requests accordingly.
The application support sublayer (APS) works as a bridge between the network layer and the other components of the application layer: it keeps up-to-date binding tables in the form of a database, which can be used to find appropriate devices depending on the services that are needed and those the different devices offer. It can also route messages across the layers of the protocol stack.

Security services
ZigBee provides facilities to create secure communications, protecting es-
tablishment and transport of cryptographic keys, ciphering frames and con-
trolling devices. It builds on the basic security framework defined in IEEE
802.15.4. This part of the architecture relies on the correct management
of symmetric keys and the correct implementation of methods and security
policies.
Chapter 3

Routing Protocol

In this chapter the basic functionality of RPL is illustrated, especially how this protocol builds up and manages the DAG.

3.1 Introduction
Low power and Lossy Networks (LLNs) are a class of networks in which nodes are constrained: LLN routers typically operate with constraints on processing power, memory and energy [8]. These routers are interconnected by lossy links, typically supporting only low data rates, that are usually unstable with relatively low packet delivery rates. A network may run multiple instances of RPL concurrently; each instance may serve different and potentially antagonistic constraints, but in this work we describe how a single instance operates. The aim of RPL is to build up a Directed Acyclic Graph (DAG), a directed graph with the property that all edges are oriented in such a way that no cycles exist and all edges are contained in paths oriented toward one or more root nodes. One of the most important factors that influence the routing protocol are the metrics; in fact RPL is a distance vector routing protocol.
The maintenance information is added as an option - called DAG Information Option (DIO) - that is periodically sent. The DIO also contains an Objective Code Point (OCP) field, which determines what metrics and functions should be used by a node to determine its height, in order to build a routing solution which favors energy-efficiency, latency, bandwidth, or a compound metric. The DIO includes a measure derived from the position of the node within the DAG, the rank, which is used by nodes to determine their positions relative to each other and to inform loop avoidance/detection procedures. In practice this routing algorithm exchanges DIO messages to establish and maintain routes.
RPL proposes a mechanism to allow data to flow from the sink outwards, which can be necessary for controlling actuators and for reconfiguration. Along with sending convergecast data, nodes in RPL transmit Destination Advertisement Option (DAO) packets, which record the sequence of nodes traversed before reaching the sink. The sink then uses source routing to send data to arbitrary nodes in the network, hence a node can communicate with the sink and vice versa. The DAG root may advertise a routing constraint used as a "filter" to prune links and nodes that do not satisfy specific properties. A routing metric is a quantitative value that is used to evaluate the path quality, also referred to as the path cost. The best path is the path with the lowest cost with respect to some metrics that satisfies all constraints (if any), and it is also called the shortest constrained path.

3.2 The RPL protocol


The DAG starts from a node that is defined as the DAG root, and it can act as a collection point for the data. If this node offers connectivity to an external infrastructure, such as the public Internet or an IP network, the DAG is called grounded, otherwise it is called floating.
From this node the setting up of the DAG starts with the emission of a DIO; the other nodes that receive this message can decide whether the DAG root is a potential parent using the information contained in the OCP.
A node is thus able to decide whether an emitting node can be considered as a potential parent. After performing the validation test, the node can decide to add the emitting node to its list of DAG parents; if so, the node joins the DAG specified by its DAGID. After entering the DAG, the node emits a multicast DIO message in order to expand the routes outward from the DAG root. Meanwhile the node can listen for DIO messages from other nodes to expand its set of parents and augment the number of available paths.

The DIO conveys the following options:

• A DAGID used to identify the DAG as sourced from the DAG root (the DAGID must be unique to a single DAG in the scope of the LLN), a Rank information used by nodes to determine their positions in the DAG, and also a Sequence Number originated from the DAG root, used to coordinate topology changes and avoid loops.

• An Objective Code Point (OCP), as described below, that contains information about the set of metrics, the objective function used to determine the least cost path, the function used to compute the DAG Rank, and the functions used to accumulate metrics for propagation within a DIO message.

• Indications and configuration for the DAG, e.g. grounded or floating, administrative preference.

• A vector of path metrics.

The decision for a node to join a DAG may be optimized according to implementation specific policy functions on the node, as indicated by one or more specific OCP values.
For example, a node may be configured to optimize a bandwidth metric (OCP-1), with a parallel goal to optimize for a reliability metric (OCP-2). In this case there are two DAGs, with two unique DAGIDs: DAG-1 would be optimized according to OCP-1, whereas DAG-2 would be optimized according to OCP-2. A node may then maintain independent sets of DAG parents and related data structures for each DAG. Note that in such a case traffic may be directed along the appropriate constrained DAG based on traffic marking within the IPv6 header.

3.3 DIO transmission and eligibility

The transmission of the DIO is essential for the construction of the DAG; for this reason this section explains when a node sends this kind of packet. The DIO sending is regulated by a trickle timer operating over a variable interval. The governing parameters of this timer are configured across the DAG and, in addition to the periodic multicast messages, the LLN nodes send a DIO in response to a Router Solicitation.
In the implementation of the simulation, a DIO is sent periodically to update the network, as we describe in the next chapter. We make this assumption because for the moment we are not taking into account the detection of inconsistencies and the other causes which lead to the variation of the trickle timer.
When a DIO message is received from a source device, the receiving node must first determine whether or not the DIO message should be accepted for further processing. If the DIO message is malformed, the packet should be silently discarded. Otherwise, if the source device is a member of the candidate neighbor set, the DIO is eligible for further processing.
If the node has sent a DIO message within the risk window, then a collision has occurred and the node does not process the DIO message. As DIO messages are received from candidate neighbors, a node can decide to choose the advertising node as a DAG parent by following the rules of DAG discovery, as will be described in the next sections. When a node places a neighbor into the DAG parent set, the node becomes attached to the DAG through the new parent node.
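The acceptance rules of this section can be summarized in a small filter: discard malformed DIOs, ignore DIOs from nodes outside the candidate neighbor set, and treat a DIO received within the risk window after the node's own DIO transmission as a collision. The field names and the window length in the sketch are assumptions made for illustration, not values taken from the RPL specification.

```python
RISK_WINDOW_S = 0.1     # assumed collision-detection window after sending a DIO

def handle_dio(dio, candidate_neighbors, last_dio_sent_at, now):
    """Decide whether a received DIO is eligible for further processing."""
    if not dio.get("well_formed", False):
        return "discard"                       # malformed: silently dropped
    if dio["source"] not in candidate_neighbors:
        return "ignore"                        # source is not a candidate neighbor
    if last_dio_sent_at is not None and now - last_dio_sent_at < RISK_WINDOW_S:
        return "collision"                     # we also sent a DIO in the risk window
    return "process"                           # run DAG discovery / parent selection

dio = {"well_formed": True, "source": "n2", "rank": 2, "dag_id": "dag-1"}
print(handle_dio(dio, candidate_neighbors={"n2", "n5"},
                 last_dio_sent_at=10.00, now=10.25))     # 'process'
```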

3.4 Data structure


In this section the data structures employed by RPL to build up the DAG are explained.

Candidate neighbor data structure
Each node has a data structure where the nodes discovered by the Neighbor Discovery are stored. In this structure it is possible to find the metrics related to the candidate neighbors, and it is very important that these metrics are stable and reliable. It is also possible to bound the number of entries, so to determine which nodes have to remain in the data structure a confidence value is used to order the neighbors.

DAGs data structure
For each DAG that a node is a member of, the implementation must keep a DAG table with the following entries:

• DAGID

• DAG Objective Code Point

• A set of Destination Prefixes offered inwards along the DAG

• A set of candidate DAG parents

• A timer to govern the sending of DIO messages for the DAG



• DAG Sequence Number

If a node wants to join a DAG for which no DAG data structure is instantiated, then the DAG data structure is instantiated. When the candidate DAG parent set becomes empty, the DAG data structure should be suppressed after the expiration of a local timer.

Candidate parents structure


To enter the DAG, nodes have to select the candidate DAG parents according to the requirements contained in the OCP.
This data structure must keep a record of:

• A reference to the neighboring device which is the DAG parent

• A record of most recent information taken from the DAG Option last
processed from the candidate DAG parent

• A state associated with the role of the candidate as a potential DAG


parent that will be further described

• A DAG Hop Timer, if instantiated

• A Held-Down Timer, if instantiated

DAG parents
The set of DAG parents is composed of the subset of candidate DAG parents that are in the 'Current' state.
The DAG parents are ordered according to the OCP so that the protocol can identify the preferred parent, and the other DAG parents must have a rank less than or equal to that of the most preferred DAG parent.
When nodes are added to or removed from the DAG parent set, there is the possibility that the preferred parent changes, so the preferred parent should be re-evaluated. Any node having a rank greater than self after such a change must be placed in the Held-Down state and evicted.
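The per-DAG and per-candidate-parent records listed in this section map naturally onto two small structures. The sketch below mirrors the listed fields with hypothetical types (the timers are reduced to plain optional timestamps) and is only an illustration of how an implementation might store them.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class CandidateParent:
    neighbor: str                     # reference to the neighboring device
    last_dio: dict                    # most recent DAG option received from it
    state: str = "HELD-UP"            # CURRENT / HELD-UP / HELD-DOWN / COLLISION
    dag_hop_timer: Optional[float] = None
    held_down_timer: Optional[float] = None

@dataclass
class DagTableEntry:
    dag_id: str
    objective_code_point: int
    destination_prefixes: list = field(default_factory=list)
    candidate_parents: list = field(default_factory=list)   # of CandidateParent
    dio_timer: Optional[float] = None
    sequence_number: int = 0

    def dag_parents(self):
        """The DAG parents are the candidates in the 'CURRENT' state."""
        return [p for p in self.candidate_parents if p.state == "CURRENT"]
```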

3.5 Objective Code Points and Objective Function

The OCP serves to convey and control the optimization objectives used within the DAG. The OCP indicates the Objective Function (OF) that is used to order the DAG parents, and the OF is also responsible for computing the rank of the device within the DAG.
The OF is fundamental for the parent selection, and it is invoked each time an event indicates that the information about a potential next hop has changed.

If the state of a candidate neighbor changes, the preferred parent could change, so the parent selection is triggered. The OF scans all the interfaces on the device and decides whether one interface is more preferred than another; the OF can also exclude an interface if it does not match a criterion required by the Objective Function.
Then the OF scans all the candidate neighbors on the selected interfaces to determine whether they are suitable as an attachment router for the DAG.

The OF computes self's rank by adding the step of rank associated with a candidate to the rank of that candidate. The step of rank might vary from 1 to 16 and is estimated as follows:

1 indicates an unusually good link.

4 indicates a 'normal', typical link.

16 indicates a link that can hardly be used to forward any packet.

After scanning all the candidate neighbors, the OF selects the current best parent, and self's rank is computed as the preferred parent's rank plus the step of rank with that parent. Other rounds of scans might be necessary to elect alternate parents and siblings.
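A minimal sketch of this rank computation and parent selection, assuming that a step of rank between 1 and 16 has already been estimated for each candidate from its link quality; the candidate list is invented for illustration.

```python
def select_parent_and_rank(candidates):
    """candidates: list of (neighbor, neighbor_rank, step_of_rank), where the
    step of rank ranges from 1 (very good link) to 16 (barely usable link).
    Returns the preferred parent and self's resulting rank."""
    best = None
    best_rank = None
    for neighbor, neighbor_rank, step in candidates:
        rank_through = neighbor_rank + step       # rank if attached via this candidate
        if best_rank is None or rank_through < best_rank:
            best, best_rank = neighbor, rank_through
    return best, best_rank

candidates = [("n1", 1, 4),    # child of the root over a typical link
              ("n4", 2, 1),    # deeper node but an unusually good link
              ("n9", 1, 16)]   # child of the root over a barely usable link
print(select_parent_and_rank(candidates))   # ('n4', 3)
```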

3.6 DAG discovery rules


DAG discovery locates the nearest sink and forms a Directed Acyclic Graph towards that sink by identifying a set of DAG parents. It also specifies a set of rules to be followed in order to avoid loops and maintain a consistent DAG structure.
For this purpose the DAG Rank is very important. After selecting the preferred parent, a node is able to compute its Rank using the metrics conveyed by the most preferred parent, the node’s own metric and the related function defined by the OCP.
A node that does not have any DAG parents in a DAG is the root of its own floating DAG; its rank is 1. This situation can happen when the node loses all of its current feasible parents. In that case, the node should remember the DAGID and the sequence counter of the DIOs of the lost parents for a period of time that covers multiple DIO intervals.
If the LLN node is attached to an infrastructure that does not support DIOs, it is the DAG root of its own grounded DAG with rank 1. This node sends the first DIO to begin the DAG construction.
After a device has joined the DAG and has computed its rank, it may move within its DAG only without increasing its own DAG Rank; otherwise the node has to leave the DAG. For this reason nodes must ignore DIOs that are received from other routers located deeper within the same DAG.
A node that has selected a new set of DAG parents but has not moved yet, because it is waiting for the DAG Hop timer to elapse, becomes unstable and does not send DIOs. When a node wants to jump from its current DAG into a different DAG, it also has to wait for a DAG Hop timer. This allows the new, higher parts of the DAG to move first, thus allowing stepped DAG reconfigurations and limiting relative movements; however, a node should not rejoin a previous DAG, identified by its DAGID, unless the sequence number in the DIO has incremented since the node left that DAG.
If a node receives a DIO from one of its DAG parents, and the DIO contains a different DAGID, indicating that the parent has left the DAG, and the node can remain in the current DAG through an alternate DAG parent, then the node should remove the DAG parent that has joined the new DAG from its DAG parent set and remain in the original DAG.
When a node detects or causes a DAG inconsistency, it sends an unsolicited Router Advertisement message to its one-hop neighbors. The RA contains a DIO that propagates the new DAG information. Such an event will also cause the trickle timer governing the periodic DIOs to be reset.
If a DAG parent increases its rank and the node does not wish to follow, the DAG parent should be evicted from the DAG parent set. If that DAG parent is the last one in the DAG parent set, then the node may choose to follow it.

3.7 Candidate DAG Parent States and Stability
Using the DIO messages a node can build up a set of candidate DAG parents, from which it chooses its parents depending on runtime conditions. The following states for the candidate parents are defined:
CURRENT The candidate parent is in the set of DAG parents and may
be used for forwarding traffic inward along the DAG.

HELD-UP A new DAG is discovered upon a Router Advertisement message; the node joins the DAG by selecting the source of the RA message as a DAG parent and propagating the DIO accordingly. Before the node jumps into the new DAG it becomes unstable; in this phase the node continues to listen to the DIO messages from other nodes in order to discover other candidate parents, and a DAG Hop timer is started for each of them. The first timer that elapses for a given new DAG clears them all for that DAG, allowing the node to jump to the highest position available in the new DAG.

HELD-DOWN When a neighboring node is ’removed’ from the Default Router List, it is actually held down for a hold-down timer period. A node that is held down is not considered for the purpose of forwarding traffic inward along the DAG toward the root. When the hold-down timer elapses, the node is removed from the Default Router List.

COLLISION A race condition occurs if two nodes send DIOs at the same time and then attempt to join each other. In order to detect this situation, LLN nodes time-stamp the sending of DIOs; a risk window is then defined, and any DIO received in this period introduces a risk.
A node will therefore forward its packets only if the parent is in the Current state.
Since in our network we want to build only one DAG, a node has no possibility to jump to another DAG; we therefore take care of the Collision state to avoid DIO collisions, and we implement a kind of DAG Hop timer to allow a node to join the DAG at the highest possible position. The Held-Down state is not considered because in this work a node cannot detach from the DAG.

3.8 Forwarding
After a node joins a DAG, if it is a sensing node, it can start to send its measurements in packets addressed to the sink. The node has to add the sink address as the final destination and to indicate the next-hop address. The next-hop address is the address of the best parent, i.e., the one minimizing the OF value. The packet is then forwarded by the intermediate nodes, each choosing its corresponding best parent, until it reaches the sink.
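
A minimal sketch of this forwarding decision is given below; the Packet structure and the routing-table layout are illustrative assumptions, not the simulator's actual data structures.

#include <map>

// Hypothetical data packet: the routing table maps a final destination
// to the next-hop (best parent) address.
struct Packet {
    int finalDestination;  // sink address
    int nextHop;           // rewritten hop by hop
};

// Returns true if the packet is for this node (deliver to the application),
// false if it was re-addressed to the next hop toward the sink.
bool forwardOrDeliver(Packet& pkt, int myAddress, const std::map<int,int>& routingTable)
{
    if (pkt.finalDestination == myAddress)
        return true;                        // packet has reached its destination
    std::map<int,int>::const_iterator it = routingTable.find(pkt.finalDestination);
    if (it != routingTable.end())
        pkt.nextHop = it->second;           // best parent toward the sink
    return false;                           // hand the packet back to the MAC layer
}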
In the next chapter we present an implementation of these procedures using OMNeT++.
Chapter 4

RPL Network Simulator

This is the main chapter of the thesis. Here we explain how the simulator is implemented: the software used, the main steps of the simulation and the parameters chosen.

4.1 OMNeT++
OMNeT++ is an object-oriented modular discrete event simulator. Its main feature is the capability to create modules that communicate with each other through messages. A module may contain one or more sub-modules, each of which may contain further sub-modules. Modules are classified as either simple or compound [9].

Figure 4.1: OMNeT++ module concept

A simple module is associated with a C++ file that supplies the desired behaviors, which encapsulate the algorithms. Compound modules are aggregates of simple modules and are not directly associated with a C++ file supplying their behaviors. Modules communicate by exchanging messages that represent frames or packets in a network. Each message may be a complex data structure, and messages may be exchanged directly between simple modules or via a series of gates and connections; it is also possible to implement self-messages, which are used by a module to schedule events at a later time.
Each layer of the sensor node is represented as a simple module of OMNeT++. The layers communicate with each other through gates, and each of the layers has a reference to the Coordinator.
These simple modules are connected according to the layered architecture of a sensor node: the different layers of the sensor node have gates to the other layers in order to form the sensor node stack.
In OMNeT++, the NED language is used to describe the structure of a simulation model. NED stands for Network Description, and NED files let the user declare simple modules and connect and assemble them into compound modules. The user can label some compound modules as networks, i.e., self-contained simulation models. Channels are another component type, whose instances can also be used in compound modules.
Through the NED files it is possible to define module parameters, but they can also be assigned in the configuration file omnetpp.ini.

To build up a simulation we need:


• NED language topology description (.ned files), which describes the module structure with parameters, gates, etc.

• Message definitions (.msg files). Various message types can be defined and data fields added to them; OMNeT++ translates the message definitions into C++ classes.

• Simple module sources. They are C++ files, with .h/.cc suffixes (a minimal skeleton is sketched after this list).

• An omnetpp.ini file to specify the parameters.
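
As a rough illustration of the third point, a minimal simple module could look as follows; the gate and parameter names ("out", "startDelay") are hypothetical and would have to match the .ned declaration, and newer OMNeT++ versions additionally require a using namespace omnetpp; directive.

#include <omnetpp.h>

// Minimal simple-module skeleton (illustrative only).
class SimpleSender : public cSimpleModule
{
  protected:
    virtual void initialize();
    virtual void handleMessage(cMessage *msg);
};

Define_Module(SimpleSender);

void SimpleSender::initialize()
{
    // Self-messages are used as timers: schedule the first transmission.
    scheduleAt(simTime() + par("startDelay").doubleValue(), new cMessage("sendTimer"));
}

void SimpleSender::handleMessage(cMessage *msg)
{
    if (msg->isSelfMessage()) {
        send(new cMessage("appPacket"), "out");   // towards the lower layer
        scheduleAt(simTime() + 0.5, msg);         // re-arm the timer
    }
    else {
        delete msg;                               // message received from another module
    }
}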

4.2 Mobility Framework


We need a framework to support simulations of wireless and mobile networks within OMNeT++, and for our work we chose the Mobility Framework [10].

The core framework implements support for node mobility, dynamic connection management and a wireless channel model, and it provides basic modules that can be derived in order to implement custom modules.
One of the main features of the MF is the ChannelControl module, which controls and maintains all potential connections between the nodes. An OMNeT++ connection link in the MF does not mean that the corresponding hosts are able to communicate with each other: a host will receive every data packet that its transceiver is potentially able to sense, and the physical layer then has to decide, depending on the received signal strength, whether the data packet will be processed or treated as noise.

4.2.1 Node's structure

We use the MF because the IEEE 802.15.4 standard is implemented in this framework. In the MF a node is composed of the modules shown in Fig. 4.2.

Figure 4.2: Simulator node stack

A NIC module is a network interface that includes physical layer functions like transmitting, receiving and modulation, as well as the medium access mechanisms. The NIC module in the MF is therefore divided into two simple modules:

• A physical-layer part (snrEval and decider). The snrEval module can be used to compute SNR information for a received message, whereas the decider module processes this information to decide whether the message got lost, has bit errors or is correctly received.

• A medium access control (MAC) part (macLayer), which coordinates actions over the shared channel. In our MAC layer implementation we decided to use unslotted CSMA/CA in non-beacon-enabled mode (also called beaconless). At a given point of the simulation, as we shall see, nodes that receive a packet send an acknowledgment; to send this ack the node uses the CSMA algorithm to avoid collisions.

The delay incurred in sending the ack will be our latency measure.
The MAC layer decides when the radio is able to receive or transmit on a channel. The interface with the MAC layer is simple: the MAC can issue one of three commands to the radio (ENTER SLEEP, ENTER RX and ENTER TX). Internally, the radio has seven states, three of which are steady; the MAC layer therefore only cares about the steady states, while the internal transient states are used to model power consumption.
In the network layer we have to implement a routing algorithm; as mentioned, we want to use RPL to find a route toward a sink node, and its implementation is described in the next section. Finally, in the application layer it is possible to define when and which packets should be sent, and with which kind of traffic pattern the node generates them.

4.2.2 Communication between layers
The handle*Msg() functions are called each time a corresponding message arrives and contain all the necessary processing and forwarding logic. There are three different functions to handle three different kinds of messages:

HandleSelfMsg Using self-messages it is possible to implement timers in OMNeT++. handleSelfMsg() is the function used to handle all timer-related events and to initiate actions upon timeouts.

HandleUpperMsg This function is called every time a message arrives from an upper layer. After processing, the message can be forwarded to the lower layers with the sendDown() function, if necessary.

HandleLowerMsg For messages from lower layers it is the other way around: after being processed they have to be forwarded to the upper layers, if necessary. This is done by using the sendUp() function, which also takes care of decapsulation.

Convenience functions are special functions defined to facilitate common interfaces and to hide inevitable interface management from the user; a schematic dispatch example is sketched after this list. The MF provides three of them:

EncapsMsg This function is called right after a message arrives from the upper layers. It is responsible for encapsulating the message into the appropriate message type for the layer. After this, the message is passed to the handleUpperMsg() function.

SendUp is the function to be called if a message should be forwarded to the upper layers and is usually called within handleLowerMsg(). It decapsulates the message before sending it.

SendDown Sending messages to the underlying layer is done with the sendDown() function. Sometimes it may be necessary to provide or process additional meta information here.
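
To make the dispatching scheme concrete, the following framework-free C++ sketch mirrors the three handlers described above; the struct and class names are ours, and the real MF modules operate on cMessage pointers rather than on this toy Msg type.

#include <iostream>
#include <string>

// Framework-free sketch of the message dispatching scheme described above.
struct Msg {
    std::string name;
    bool isSelfMessage;          // timer scheduled by the module itself
    bool arrivedFromUpperLayer;  // true if the message came from the layer above
};

class LayerSketch {
public:
    virtual ~LayerSketch() {}

    // Single entry point, as in OMNeT++: decide which handler to call.
    void handleMessage(const Msg& m) {
        if (m.isSelfMessage)              handleSelfMsg(m);
        else if (m.arrivedFromUpperLayer) handleUpperMsg(m);
        else                              handleLowerMsg(m);
    }

protected:
    virtual void handleSelfMsg(const Msg& m)  { std::cout << "timeout: "  << m.name << "\n"; }
    virtual void handleUpperMsg(const Msg& m) { std::cout << "encapsulate, sendDown(): " << m.name << "\n"; }
    virtual void handleLowerMsg(const Msg& m) { std::cout << "decapsulate, sendUp(): "   << m.name << "\n"; }
};

int main() {
    LayerSketch layer;
    layer.handleMessage(Msg{"dioTimer",  true,  false});
    layer.handleMessage(Msg{"appPacket", false, true});
    layer.handleMessage(Msg{"macFrame",  false, false});
    return 0;
}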

The Message Concept


In order to provide basic functionality like encapsulating and decapsulating messages in the Basic* modules, fixed message formats are needed for every layer. The provided message types have the most important fields needed for the corresponding layer; these message types and fields are mandatory and can only be extended, not exchanged. Here is a list of all the base message classes and their parameters:

• For the physical layer, the AirFrame message is used; it contains the following parameters: sending power, channel used to send the message, id of the originator (used to obtain its position), time needed to send the message on the channel, and a control-information class used to pass SNR information to the decider.

• The medium access control message is MacPkt, where we find: destination MAC address, source MAC address and the channel used for forwarding the message.

• NetwPkt is the message used in the network layer; its fields contain: destination network address, source network address, sequence number, time to live, and a control-information class used to tell the MAC protocol the address of the next hop.

• Finally, ApplPkt is used to set the destination application address and the source application address at the application layer.

However, the framework allows the user to implement new messages or to modify existing ones. For example, to implement the protocol we need to add the information about the Rank at the network layer; hence we modify the NetwPkt, as sketched below.
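
As an illustration only (not the actual simulator code), the extension could look roughly like the following C++ sketch. In practice the field would be declared in a .msg file and opp_msgc would generate the corresponding class; the header name, class name and accessors below are assumptions.

#include "NetwPkt_m.h"   // assumed: the header generated from the MF NetwPkt definition

// Hypothetical network-layer packet carrying the RPL rank (illustrative only).
class RplNetwPkt : public NetwPkt
{
  public:
    explicit RplNetwPkt(const char *name = "rplPkt") : NetwPkt(name), rank(0) {}
    int  getRank() const { return rank; }
    void setRank(int r)  { rank = r; }
  private:
    int rank;   // rank of the sending node, carried in every DIO
};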

4.2.3 Network implementation


Below we list the steps followed to implement the simulation of the network.

Phase 1
The first step is to decide the network topology, i.e., to establish the number of nodes and their positions through the .ini file. In the network there is a single root node that works as a sink and collects packets from all the other nodes. We use a peer-to-peer topology where there is a PAN coordinator, as in the star topology, but with the difference that any device can communicate with any other device within its communication range.

In our network we have only one root and the latency is the only metric used (see the previous chapter); for this reason there is only one DAGID, and thus we do not make use of a DAGID in our software implementation. The metric used to build the DAG and obtain the rank is the latency, so we first have to find a method to compute the latency and get an initial value with which to run the algorithm that constructs the DAG. We assume that each node sends a broadcast packet that is used to discover children: when a child receives this message, it recognizes the sender as a possible parent and sends an acknowledgment to inform the sending node that the packet has been received.
We establish that the node sending the ack employs the CSMA algorithm, and this ack is used to estimate the latency of sending a packet from the child to the parent. The latency is given by the time between the instant at which the node wants to send the ack message and the moment at which it is actually sent; hence we have to consider the backoff periods and the CCA time that the protocol uses to avoid collisions. We consider the packet transmission time negligible.
In the simulation each node sends five broadcast packets; the nodes that receive a broadcast packet (the possible children of the sending node) reply with an ack and collect the latency value, and at the end of the process each node computes the average latency to obtain a more accurate value. Moreover, since we want the latency value to refer to the packets actually received, we have to derive the probability that a packet reaches the destination node and take it into account in the metric formulation.
According to the RPL standard we have to quantize the latency to obtain a metric value, called the step of rank, which will be used to compute the rank of the entering nodes and to find the OF value used to select the best parent. This value is calculated at the MAC layer and is stored in a matrix, which we call the Latency Matrix, where the rows correspond to the node that sends the ack and the columns to the receiving node. This matrix is stored in a text file, to be used later at the network layer to decide the best parent. In a real deployment the OF value would be stored in the node that has to decide the parent to which the packet is forwarded.
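
The following sketch illustrates one possible way to carry out this quantization; the normalization constant used to map seconds onto the 1-16 range is an assumption of the example, not a value taken from the thesis.

#include <algorithm>
#include <cstddef>
#include <vector>

// Averaging the ack latencies collected for one link.
double averageLatency(const std::vector<double>& samples)
{
    double sum = 0.0;
    for (std::size_t i = 0; i < samples.size(); ++i) sum += samples[i];
    return samples.empty() ? 0.0 : sum / samples.size();
}

// Map a measured latency to the RPL "step of rank" (1 = very good link,
// 16 = barely usable), scaling by the inverse of the delivery probability
// so that lossy links look more expensive.  latencyForStep1 is hypothetical.
int quantizeStepOfRank(double avgLatencySeconds, double deliveryProbability,
                       double latencyForStep1 = 0.001 /* s */)
{
    double effective = avgLatencySeconds / deliveryProbability;
    int step = static_cast<int>(effective / latencyForStep1 + 0.5);
    return std::max(1, std::min(step, 16));   // clamp to the allowed range
}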
The process to obtain these initial latency values starts from the root, which sends the established number of broadcast packets; each node that receives this message may later enter the network choosing a node as a parent. When the root stops sending its broadcast packets, only the nodes that have received a packet are able to send their own packets to discover other nodes that could enter the network. At the end of this phase each node knows its neighbors and the latency to send packets to its parent. How many packets of this kind have to be sent, the sending rate and the traffic type are decided in the application layer NED file.

Phase 2
When the discovery phase is finished, the second step starts, in which we want to form the DAG. The process starts from the root node, which sends the first DIO packet. We add the necessary information by changing the message used in the network layer of the Mobility Framework (NetwPkt); in particular we add the rank information, which, as we know, is essential to establish the position of the node in the DAG and to avoid loops. According to the protocol the root rank is 0, and the device adds this information to the packet at the network layer.
Suppose that a node receives a DIO. The node adds itself to the DAG and computes the value of the OF for all the candidate parent nodes. The parent node with the lowest OF value is selected as the best parent, and the node entering the DAG computes its rank as the candidate parent’s rank plus the step of rank. Every time a node receives a DIO, it updates its routing table with the address of the best parent, chooses a parent and updates its rank. Then the node informs the other nodes about these changes by sending its own DIO. As mentioned in Chapter 3, nodes retransmit the DIO packets to update the DAG continuously; in our case we simplify the DIO transmission method and assume that DIOs are sent at a constant interval.
In this way every node, after it has received a DIO packet and joined the DAG, starts to send traffic packets to its best parent. These packets are forwarded by each receiving node to its corresponding best parent, until they reach the sink node. For this reason each node adds the sink address as the final destination address and the best parent address as the next-hop address. When a node receives a packet it checks whether the packet is addressed to itself; otherwise it searches its routing table for the next-hop address to reach the destination. Clearly, in many applications there may be multiple final destinations, which means having more than one sink node.
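
To fix ideas, the DIO handling described above could be sketched as follows; the data structures and field names are illustrative and do not correspond to the actual simulator classes.

#include <map>

// Illustrative handling of a received DIO at the network layer.
struct DioInfo { int senderAddr; int senderRank; };

struct NodeState {
    int myRank;
    int bestParent;                   // -1 while the node has not joined the DAG
    std::map<int,int> routingTable;   // destination -> next hop
    std::map<int,int> stepOfRank;     // neighbor -> quantized latency metric
};

void handleDio(NodeState& n, const DioInfo& dio, int sinkAddr)
{
    bool joined = (n.bestParent != -1);
    // Nodes must ignore DIOs received from routers located deeper in the DAG.
    if (joined && dio.senderRank >= n.myRank)
        return;
    int rankVia = dio.senderRank + n.stepOfRank[dio.senderAddr];
    if (!joined || rankVia < n.myRank) {
        n.bestParent = dio.senderAddr;            // new preferred parent
        n.myRank = rankVia;                       // rank = parent rank + step of rank
        n.routingTable[sinkAddr] = n.bestParent;  // all traffic goes toward the sink
        // ...the node would then send its own DIO to advertise the new rank.
    }
}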

Latency estimation
During the transmission of the traffic packets, the latency estimation keeps running using the same method as in the first phase; the purpose is to see whether there is a variation of the latency, caused by the cross-traffic packets, that could influence the routing state by changing the route towards the sink. Each node collects a certain number of latency values, computes their mean and updates the latency matrix. Whenever a node retransmits the DIO, the receiving node updates the OF value and, if necessary, changes the best parent, thus finding a less expensive path.
In this work we use two methods to derive the latency. The first one is experimental and takes into account the probability that the packet is correctly received. Since we know how many packets a node sends and whether each packet is received, this probability is easy to obtain, so the step of rank is given by the following relation:

$$\text{step of rank} = \frac{\text{experimental latency}}{\text{probability of correct reception}}$$
Then we use an empirical model to verify the results obtained. The latency is given by the following equation [12]:

$$D = \sum_{i=0}^{NB}\left[\left(\sum_{k=0}^{i}\frac{W_k + 1}{2}\right) + L\right]\frac{\alpha^{i}(1-\alpha)}{1-\alpha^{NB+1}}\, r_s ,$$

where $r_s$ is the slot duration, $\alpha$ is the busy channel probability, $NB$ is the maximum number of backoffs, $L$ denotes the packet transmission duration measured in slots, and $W_i$ is the delay window, which is initially $W_0 = 2^{BE_{min}}$ and is doubled at each stage until $W_i = W_{max} = 2^{BE_{max}}$, for $(BE_{max} - BE_{min}) \le i \le NB$. This model is empirical since $\alpha$ is obtained experimentally using the simulator: every node knows how many times it attempted to access the channel and the number of times the channel was found busy, so this probability is easy to derive. How the equation above is obtained is explained in the appendix. To have another check of the measurements we also compute $\alpha$ using only the analytical model. Knowing when a packet is sent by the node, we can compute the value of $X$, which denotes the time to wait before the next transmission attempt. Using a Matlab simulation we derived the analytical value of $\alpha$ to obtain the mean latency for the sent packets.
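
For reference, the empirical delay model can be evaluated with a few lines of code; the function below is a direct transcription of the equation above, with the MAC parameters passed in as arguments.

#include <algorithm>
#include <cmath>

// D = sum_{i=0}^{NB} [ sum_{k=0}^{i} (W_k+1)/2 + L ] * alpha^i (1-alpha) / (1 - alpha^(NB+1)) * r_s,
// with W_0 = 2^BEmin doubled at each stage and capped at 2^BEmax.
double empiricalDelay(double alpha, int NB, int BEmin, int BEmax, double L, double rs)
{
    double norm = 1.0 - std::pow(alpha, NB + 1);
    double D = 0.0;
    for (int i = 0; i <= NB; ++i) {
        double backoffSum = 0.0;
        double W = std::pow(2.0, BEmin);                  // W_0
        for (int k = 0; k <= i; ++k) {
            backoffSum += (W + 1.0) / 2.0;
            W = std::min(W * 2.0, std::pow(2.0, BEmax));  // cap at W_max
        }
        D += (backoffSum + L) * std::pow(alpha, i) * (1.0 - alpha) / norm;
    }
    return D * rs;   // rs is the slot duration in seconds
}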

4.2.4 Network parameters


In this section we show how the network parameters are set up.

Information about traffic


Our simulation is implemented in three steps. In the first one we want to find the metrics used to compute the nodes’ rank and build up the DAG according to the RPL rules. We have 8 static nodes; node 0 is the sink node, and each node, after it has entered the network, has to send 5 packets to discover other nodes, obtaining also the latency as described in the previous paragraph. Nodes generate broadcast packets, and to schedule each packet the simulator chooses a random value uniformly distributed between 0 and 2 seconds.
When all the nodes in the network have sent their five packets, the sink node starts the second phase by sending the first DIO packet; afterwards each node sends a DIO every 0.5 seconds to update the DAG periodically. After a node sends its first DIO packet, having entered the DAG, it starts to send traffic packets towards its best parent. The traffic model used is uniform in this case as well, and the interval chosen for the traffic is between 0.0012 and 0.075 seconds. It is not possible to choose 0 as the lower bound, because the node could schedule the sending of the next packet before it has sent the previous one; the result would be finding the node busy and inserting the packet into its queue, with the risk of accumulating many packets in the queue and losing them. After the node sends a traffic packet, we collect the latency value in a vector, and every 1 second, when this timer expires, we compute the average of the vector, updating the metric in the latency matrix. The simulation ends when all the nodes have sent all the DIOs and the traffic packets.

Channel parameters
ChannelControl is the module that controls the channel; it is the central module that coordinates the connections between all nodes and handles dynamic gate creation. In this module we can set four parameters: pMax, sat, alpha and the carrier frequency, where pMax is the maximum sending power used in the network (in mW), sat is the minimum signal attenuation threshold (in dBm), alpha is the minimum path loss coefficient and the carrier frequency is the minimum carrier frequency of the channel (in Hz). These values are used to calculate the interference distance between nodes, i.e., the distance at which a node can still disturb the communication of a neighbor; this distance depends mainly on the channel model in use. For our simulation the following values are chosen.

Channel parameters
pMax                  110.11 mW
sat                   -120 dBm
alpha                 2
carrier frequency     869 MHz

MAC layer parameters


The latency at the MAC layer, especially the part caused by the CSMA algorithm used to avoid packet collisions, is highly influenced by the MAC parameters specified in the NED file. In this file we can set parameters related to the CSMA algorithm but also parameters concerning the network implementation. The values are shown in the table below.

MAC parameters
Level at which the medium is considered busy             -97 dBm
Length of the MAC header                                 72 bit
Maximum number of packets in the MAC Tx buffer (queue)   100
Bit rate                                                 250000 bps
Complete MAC ack message length                          40 bit
Minimum backoff exponent                                 3
Maximum backoff exponent                                 8
Backoff exponent                                         5
Maximum number of frame retransmissions                  3
Base unit for all backoff calculations                   0.00032 s
Clear Channel Assessment (CCA) detection time            0.000128 s

4.3 How to improve network stability


In the next chapter we show that a node changes its best parent continuously while the network is operating. This is due to latency variations that cause a change in the route from the sensing node to the sink [13], [14]. However, the new route may affect the future values of the latency in a negative way; the consequence is that there may be unnecessary route changes.
In this thesis we propose a method to avoid useless variations of the logical topology by adding depth information to the nodes. Depths are assigned in such a way that they increase with the distance to a central node. The distance is calculated by using a cumulative cost function which can be based on the hop count. The sink node issues a DIO with this information set to 0; when the nodes receive this message, they increment the counter by one, set their depth and relay the message to their neighbors. At the end of this process the depth of a node indicates how many times a message is transmitted before it reaches the sink. This information could be useful to make a prediction about the cross traffic that will be generated around the node while the network is working.
We then want to find an experimental relation between the busy channel probability $\alpha$ and the traffic $\lambda$ that a node can overhear. To obtain this relation we have to count the number of packets that a node receives, including packets that are not addressed to it, and observe the corresponding variation of $\alpha$. Since we know the depth of each node, we can estimate the number of children of each node and derive the traffic that the node is going to forward. Thus each node would know the expected traffic $\hat{\lambda}$ in its communication range and would obtain the estimated busy channel probability $\hat{\alpha}$ using the experimental relation that connects these two parameters. We would like to use this value of $\hat{\alpha}$ in the analytical formulation of the latency to compute a value of the delay that accounts for the cross-traffic. Using this value as the metric for RPL, we want to avoid useless variations of the best parent, improving also the stability of the traffic.
The disadvantage of the proposed method is the complexity of the procedure needed to compute the metric. For this reason we also propose and implement a second method to avoid harmful variations of the best parent. Here the latency used by the routing algorithm is given by the mean between the current experimental value and the previous one. We introduce memory with the purpose of alleviating the jitter of the latency by using a more accurate value.

Since the latency fluctuates less, the nodes have fewer chances to change their best parent. In this way we try to improve the stability of the paths toward the sink. If a path is more stable, the end-to-end latency suffers fewer variations and is more accurate. A stable route also implies a more uniform traffic load, which improves the routing stability. Furthermore, this solution is very simple to implement on the node because it does not require heavy computational operations, which we want to avoid considering the limited processing speed of the nodes.
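
A sketch of this second method is shown below: the metric returned for a link is the average of the current latency sample and the previous one, i.e., a one-tap memory filter (an exponentially weighted moving average would be a natural alternative design choice).

#include <map>

// Smoothing of the per-link latency metric described above (illustrative names).
class SmoothedLatency {
public:
    // Returns the value to be used as the routing metric for this link.
    double update(int linkId, double currentLatency) {
        std::map<int,double>::iterator it = previous_.find(linkId);
        double metric = (it == previous_.end())
                        ? currentLatency                        // first sample: no memory yet
                        : 0.5 * (currentLatency + it->second);  // mean of current and previous
        previous_[linkId] = currentLatency;                     // remember the raw sample
        return metric;
    }
private:
    std::map<int,double> previous_;   // last raw latency sample per link
};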
Chapter 5

Results

In this chapter the results obtained with the simulator described in the previous chapter are reported.

5.1 Simulation results


The first phase of the simulation is to obtain the metric for the DAG construction; in the previous chapter we have seen how to calculate the metric for each link, and we know that it is stored in a matrix. For our network, we report the metric matrix below:
$$\begin{pmatrix}
0 & 6 & 3 & 0 & 0 & 0 & 0 & 0\\
3 & 0 & 4 & 3 & 0 & 3 & 0 & 0\\
4 & 5 & 0 & 8 & 0 & 0 & 0 & 0\\
0 & 4 & 3 & 0 & 6 & 3 & 5 & 4\\
0 & 0 & 4 & 3 & 0 & 0 & 0 & 4\\
0 & 0 & 0 & 4 & 0 & 0 & 3 & 0\\
0 & 4 & 0 & 3 & 0 & 4 & 0 & 3\\
0 & 10 & 0 & 5 & 2 & 0 & 5 & 0
\end{pmatrix}.$$
These values refer to the network shown in Fig. 5.1. The rows of the matrix correspond to the node that sends the ack, and the columns to the receiving node. The elements of the matrix are the steps of rank that the nodes use to compute the OF value for selecting the best parent. The matrix also gives information about the neighbors of a node. To perform the analysis we consider node number three since, being at the center of the network, it has a high number of neighbors and is therefore subject to high traffic. The node can choose among the parents with a lower rank and selects the one with the minimum OF value.
All the nodes in the network do the same, and at the end of this process we have a DAG.

Figure 5.1: Nodes connected by the DAG

In Fig. 5.2 we show the simulation-based latency, assuming that the network is not updated by further DIO packets; therefore each node sends only one DIO packet to form the DAG and always sends the traffic packets on the same link. The purpose is to study the variation of the latency.

Figure 5.2: Experimental latency without network DIO updating



We can see from the graph that the latency on the same link may exhibit large variations. This latency is computed by considering the probability that the packet is correctly received. However, to validate the results we should compare it with the latency obtained from the empirical model, shown in Fig. 5.4. For this reason we first have to derive the busy channel probability α for node 3.

Figure 5.3: Busy channel probability

Using the value of α reported in Figure 5.3 we obtain the empirical latency.

Figure 5.4: Empirical latency without network DIO updating

By comparing the two temporal traces we can see a good agreement between the simulation-based and the empirical models. To be more confident, we should also compare the graphs obtained experimentally with a completely theoretical analysis.

Figure 5.5: Analytical latency

In this case the plots do not match. This happens because the analytical formulation assumes that the busy channel probability is the same for all the links of a node. In reality this value differs from link to link, since each node forwards a different amount of traffic towards the root, and thus the value of α is different from node to node. However, the model can still be used for the empirical method, since in that case the busy channel probability is computed experimentally, which takes into account that the traffic in the network is not homogeneous.
From Fig. 5.2 it is easy to see that the latency fluctuates on the same link; this fluctuating trend is caused by the cross-traffic that is now forwarded in the network by all the nodes. Hence another link may become better than the link currently in use, so the node should change its best parent. When the nodes send more than one DIO, the network is updated in order to minimize the OF.

While the network is working we keep monitoring node number three, and we see from the following graph that this device changes its best parent continuously.

Figure 5.6: Best parent selected by node number three

This implies that node three changes its route to reach the sink. The same happens for all the nodes in the network; for this reason the random latency introduced by the CSMA/CA may determine time-varying delays, which lead to routes with unstable delay.

Figure 5.7: Logical topology variation

In the following we introduce the proposed methods to reduce the fluctuation in the choice of the best parents. As mentioned, the first step of the first method is to estimate how the busy channel probability changes depending on the traffic generated by the nodes in the communication range, in order to estimate latency values that take the cross-traffic into account.

Figure 5.8: Variation of α as a function of the traffic λ

The continuous line shows the average value of α as the traffic varies, considering all the nodes in the network, and the blue stars represent the values of α that each node can assume. Unfortunately, from this graph we see that it is not possible to obtain an accurate experimental relation between α and λ. This happens because the value of the busy channel probability depends not only on the traffic from the nodes in the communication range, but also on the number of nodes along their paths. The traffic in the network is not homogeneous, thus the value of α is not accurate enough to make a useful estimation of the busy channel probability as a function of the traffic. For these reasons the only way to use this method is to derive an analytical relation to estimate the value of α depending on the traffic.

We now investigate what happens when employing the second method to alleviate the jitter of the latency. As mentioned, the step of rank used to compute the OF value is given by the mean between the current value of the latency and the previous one; in this way we introduce memory in the best parent decision. We can see that the latency for node three assumes a more stable value (Fig. 5.9). Analyzing the best parent selection we find
Figure 5.9: Consequence of the second method to alleviate the jitter of the latency (left: experimental latency without jitter alleviation; right: with jitter alleviation)

that there is an initial phase of instability. This happens because the latency values that the protocol uses to build up the initial DAG do not take into account the cross traffic generated by the other nodes in the network. After a transient, node three chooses node one as the best parent to forward the traffic, as shown in Fig. 5.10.

Figure 5.10: Best parent selected by node three

After the initial phase of instability the node changes its parent only in one case. This variation of the best parent happens because the channel model that we use in the simulation is not ideal: a node selects another node to forward the information if the channel conditions are not favorable for the transmission. For this reason we can conclude that the routing path is now more stable, implying a more homogeneous traffic load in the network. The nodes use the same parent to forward the traffic to the sink, avoiding continuous logical topology variations that can bring instability.
The instability of the best parent selection leads to a variable end-to-end latency, which is harmful for control applications. Hence we investigate the end-to-end latency with and without the proposed solution to alleviate the jitter, considering node 7. This node is one of the most distant from the sink, so the end-to-end latency of its packets is among the most variable. From the plots in Fig. 5.11 we can see that the end-to-end latency takes more

Figure 5.11: End-to-end latency with and without the method to alleviate the jitter

variable values when we do not make use of the method. To validate this observation we compute the mean and the variance of the end-to-end latency for all the nodes in the network, changing some network parameters such as the packet generation rate, the number of nodes and the transmitted power.

First of all we compute the mean and variance of the latency while changing the number of packets generated by each node in one second.

Figure 5.12: Consequence of the method on the mean and variance of the end-to-end latency, varying the packet generation rate

In Fig. 5.12 the blue line represents the measurements obtained without the solution to reduce the variation of the best parent, while the red one shows the values obtained using the method. From the traces we can see that the mean and the variance of the end-to-end latency grow as the number of packets sent increases. This happens because there are more collisions in the network, so the number of lost packets increases and consequently so does the latency.
We can see that, using the proposed solution, the average latency decreases. The method avoids using parents that only momentarily appear more convenient because of the traffic variation. Considering the variance we can observe the greatest benefits of the method: the graph clearly shows a good reduction of the variance, and this reduction increases with the number of packets forwarded in the network. This happens because with the solution there are fewer variations of the best parent and the end-to-end latency assumes a more stable value. This is very useful especially in control applications, where the sink node should know the time needed to reach the nodes in the network in order to make the appropriate corrections in critical situations.

We also want to verify that the solution brings benefits when varying the number of nodes.

Figure 5.13: Consequence of the method on the mean and variance of the end-to-end latency, varying the number of nodes

In this case we can see that there are reductions in both the mean and the variance of the end-to-end latency. The data show that the benefit increases slightly as the number of nodes grows. We have to consider, however, that we are not using a large number of nodes, in order to simplify the analysis of the simulation; with a network of many nodes the benefits would be greater, given the trend of the plot.
Finally, we want to show that the proposed solution also works when changing the transmitted power.

Figure 5.14: Consequence of the method on the mean and variance of the end-to-end latency, varying the transmitted power

In this case we can see that the trends of the plots are not as regular as in the previous figures. This happens because, by increasing the transmitted power, each node increases its number of available parents; hence there are more available paths to reach the sink node, so the end-to-end latency shows more variation, especially if in the simulations we do not use the method to reduce the switching of the best parent. In fact we can see from Fig. 5.14 that the red trace has a more regular trend and in some situations presents a significant advantage, not only in terms of variance but also considering the mean of the end-to-end latency.
Chapter 6

Conclusion

In this thesis the main purpose was to study the routing protocol RPL over IEEE 802.15.4. For this reason an OMNeT++ simulator was used to implement RPL. Using the simulations we observed how the MAC and the network layer are strictly connected: the delay introduced by the CSMA/CA algorithm to avoid packet collisions influences the routing decisions. Hence, when a routing protocol is developed, the MAC layer should be taken into account.
We could also see how the use of the latency as a metric influences the routing stability. The routing paths are not stable, and the delay changes whenever a node in the route changes its best parent.
This variation is caused by the jitter of the latency, which is due to the packets forwarded by the nodes in the network, resulting in a non-homogeneous traffic distribution. The continuous change of the best parent aggravates this situation. For this reason, we introduced two methods to alleviate the jitter.
In the first one we estimated the value of the latency taking into account the presence of the cross traffic, obtaining more realistic metric values; we tried to avoid the switching of the best parent in order to have a more stable traffic load. However, to implement this method a simple analytical relation to estimate the value of the busy channel probability as a function of the traffic has yet to be found.
In the second method we introduced memory in the delay calculation: the latency used by the algorithm is given by the mean between the current value of the latency and the previous one. In this way the latency has more stable values, which implies fewer best parent changes and a more homogeneous traffic. From the results we can see that this solution works satisfactorily.


To complete this work it would be useful:

1. To continue the study of the first proposed method in order to prove its validity

2. To make a Monte Carlo analysis of the two solutions, to verify that they work for all network topologies

3. To implement the RPL protocol on a real sensor network, to study the differences between the real behaviour and the simulation results.
Chapter 7

Appendix

Analysis of Unslotted IEEE 802.15.4


In this appendix an analysis of the unslotted IEEE 802.15.4 random access is proposed. The goal of the analysis is to derive an expression for the packet delivery delay. The analysis requires finding a set of equations that uniquely define the network operating point. We give the details in the sequel.
In the unslotted IEEE 802.15.4 carrier sense multiple access with collision avoidance (CSMA/CA) mechanism, each device in the network has two variables: $NB$ and $BE$. $NB$ is the number of times the CSMA/CA algorithm has to back off while attempting the current transmission; it is initialized to 0 before every new transmission. $BE$ is the backoff exponent, which is related to how many backoff periods a device must wait before attempting to assess the channel. The algorithm is implemented using units of time called backoff periods. The parameters that affect the random backoff are $BE_{min}$, $BE_{max}$ and $NB_{max}$, which correspond to the minimum and maximum of $BE$ and to the maximum of $NB$, respectively.
The unslotted CSMA/CA mechanism works as follows. $NB$ and $BE$ are initialized to 0 and $BE_{min}$ respectively (Step 1). The MAC layer delays for a random number of complete backoff periods in the range 0 to $2^{BE}-1$ (Step 2) and then requests the PHY to perform a CCA (clear channel assessment) (Step 3). If the channel is assessed to be busy (Step 4), the MAC sublayer increments both $NB$ and $BE$ by one, ensuring that $BE$ is not larger than $BE_{max}$; if the value of $NB$ is less than or equal to $NB_{max}$, the CSMA/CA algorithm returns to Step 2, otherwise it terminates with a Channel-Access-Failure status. If the channel is assessed to be idle (Step 5), the MAC layer starts the transmission immediately.
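
For clarity, the algorithm just described can be transcribed in a few lines of code; the busy-channel test and the backoff wait are crude stand-ins for the real PHY and timer operations.

#include <cstdlib>

// Stand-ins for the CCA performed by the PHY and for the wait of a random
// number of backoff periods in [0, 2^BE - 1] (both are placeholders).
static bool channelIsBusy()       { return (std::rand() % 100) < 30; }   // 30% busy, arbitrary
static void randomBackoff(int BE) { (void)(std::rand() % (1 << BE)); }   // would be a timer in a simulator

// Returns true if the packet may be transmitted, false on Channel-Access-Failure.
bool unslottedCsmaCa(int BEmin, int BEmax, int NBmax)
{
    int NB = 0;                       // Step 1: initialize the backoff counters
    int BE = BEmin;
    for (;;) {
        randomBackoff(BE);            // Step 2: delay 0..2^BE-1 backoff periods
        if (!channelIsBusy())         // Step 3: perform the CCA
            return true;              // Step 5: channel idle -> transmit immediately
        ++NB;                         // Step 4: channel busy
        if (BE < BEmax) ++BE;
        if (NB > NBmax)
            return false;             // too many attempts: Channel-Access-Failure
    }
}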


The analysis is developed in two steps. First, the behavior of a single device is studied by using a Markov model. From such a model,

Figure 7.1: Markov chain model

we obtain the stationary probability $\varphi$ that the device attempts its clear channel assessment (CCA). Then the per-user Markov chains are coupled to obtain an additional set of equations that give the CCA assessments of the other users. The solution of this set of equations provides the per-user $\varphi$ and the probability of a free channel assessment.
First, the Markov model is developed to determine $\varphi$. Let $c(t)$ be the stochastic process representing the counter for the random delay and the packet transmission duration. The integer time $t$ corresponds to the beginning of the slot times. Let $\alpha$ be the probability of assessing the channel busy during CCA. Next, when entering the transmission state, $L$ slots have to be counted, where $L$ denotes the packet transmission duration measured in slots. Let $X$ denote the time duration to wait before the next transmission attempt, measured in slots. Let $s(t)$ be the stochastic process representing the delay line stage, i.e., the number of times the channel has been sensed busy before the packet transmission ($s(t) \in \{0, \dots, NB\}$), or the transmission stage ($s(t) = -1$), at time $t$. The states with $s(t) = -2$ in Fig. 7.1 model unsaturated periodic traffic.
It is assumed that the probability to start sensing is constant and independent of all other devices and of the number of retransmissions suffered. With this assumption, $\{s(t), c(t)\}$ is the two-dimensional Markov chain of Fig. 7.1 with the following transition probabilities:

$$P\{i,k \mid i,k+1\} = 1, \quad k \ge 0 \qquad (7.1)$$
$$P\{0,k \mid i,0\} = \frac{1-\alpha}{W_0}, \quad i < NB \qquad (7.2)$$
$$P\{i,k \mid i-1,0\} = \frac{\alpha}{W_i}, \quad i \le NB,\; k \le W_i - 1 \qquad (7.3)$$
$$P\{0,k \mid NB,0\} = \frac{1}{W_0}. \qquad (7.4)$$
In these equations, the delay window $W_i$ is initially $W_0 = 2^{BE_{min}}$ and is doubled at each stage until $W_i = W_{max} = 2^{BE_{max}}$, for $(BE_{max} - BE_{min}) \le i \le NB$.
Equation (7.1) is the condition for decrementing the delay line counter at each slot. Equation (7.2) states that it is only possible to enter the first delay line from a stage that is not the last one ($i < NB$) after sensing the channel idle and hence transmitting a packet. Equation (7.3) gives the probability that there is a failure in the channel assessment and the station selects a state in the next delay level. Equation (7.4) gives the probability of starting a new transmission attempt when leaving the last delay line, following a successful or failed packet transmission attempt.
Denote the Markov chain steady-state probabilities by $b_{i,k} = P\{(s(t), c(t)) = (i,k)\}$, for $i \in \{-1, \dots, NB\}$ and $k \in \{0, \dots, \max(L-1, W_i - 1)\}$. Using Equation (7.3), it is obtained that

$$b_{i-1,0}\,\alpha = b_{i,0}, \quad 0 < i \le NB,$$
which leads to
$$b_{i,0} = \alpha^{i}\, b_{0,0}, \quad 0 < i \le NB.$$
From Equations (7.1)–(7.4) we obtain
$$b_{i,k} = \frac{W_i - k}{W_i}\left[(1-\alpha)\sum_{j=0}^{NB} b_{j,0} + \alpha\, b_{NB,0}\right] \quad \text{for } i = 0,$$
$$b_{i,k} = \frac{W_i - k}{W_i}\, b_{i,0}, \quad \text{for } i > 0.$$

Since the probabilities must sum to 1, it follows that
$$1 = \sum_{i=0}^{NB}\sum_{k=0}^{W_i-1} b_{i,k} + \sum_{i=0}^{L-1} b_{-1,i} + \sum_{i=0}^{X-1} b_{-2,i}
   = \sum_{i=0}^{NB} b_{i,0}\left[\frac{W_i+1}{2} + (1-\alpha)L + (1-\alpha)X\right] + b_{NB,0}\,\alpha X.$$
By substituting the expression for $W_i$, we obtain
$$1 = \frac{b_{0,0}}{2}\Biggl\{\bigl[1 + 2(1-\alpha)(L+X)\bigr]\frac{1-\alpha^{NB+1}}{1-\alpha} + 2X\alpha^{NB+1} + 2^{diffBE}\, W_0\, \frac{\alpha^{diffBE+1}-\alpha^{NB+1}}{1-\alpha} + W_0\, \frac{1-(2\alpha)^{diffBE+1}}{1-2\alpha}\Biggr\} \qquad (7.5)$$

where $diffBE = BE_{max} - BE_{min}$. The transmission failure probability $P_f$ is
$$P_f = b_{NB,0}\,\alpha, \qquad (7.6)$$
and the probability that a node starts to transmit is
$$\tau = P_s = \varphi(1-\alpha),$$
in which
$$\varphi = \varphi_1 = \sum_{i=0}^{NB} b_{i,0} = b_{0,0}\,\frac{1-\alpha^{NB+1}}{1-\alpha}. \qquad (7.7)$$
An expression for $\varphi$ has thus been derived from the per-user Markov model. By determining the interactions between the users on the medium, an expression for $\alpha$ is now derived. Assume that there are $N$ nodes in the network. Denote by $M(s) = -1$ the event that there is at least one transmission in the medium by another node and assume that, without loss of generality, the sensing node is $i_N$, which is denoted as $S^{i_N}(c) = -1$, while $S^{i}(s) = -1$ is the event that node $i$ is transmitting. Then, the probability that a node sensing the channel finds it occupied is $\alpha = \Pr(M(s) = -1 \mid S^{i_N}(c) = -1)$, which is computed as follows:

$$\alpha = \Pr\bigl(M(s) = -1 \mid S^{i_N}(c) = -1\bigr)
= \sum_{n=0}^{N-2}\binom{N-1}{n+1}\Pr\Bigl(\bigcap_{k=1}^{n+1} S^{i_k}(s) = -1 \,\Big|\, S^{i_N}(c) = -1\Bigr)$$
$$= \sum_{n=0}^{N-2}\binom{N-1}{n+1}\Pr\bigl(S^{i_1}(s) = -1\bigr)\,\Pr\Bigl(\bigcap_{k=2}^{n+1} S^{i_k}(s) = -1 \,\Big|\, S^{i_1}(s) = -1,\ S^{i_N}(c) = -1\Bigr). \qquad (7.8)$$

The probability that node $i_1$ is transmitting is
$$\Pr\bigl(S^{i_1}(s) = -1\bigr) = (L+1)P_s = (L+1)\varphi(1-\alpha), \qquad (7.9)$$

which requires the node to sense (with probability $\varphi$) before transmission and the following slot to be empty (with probability $1-\alpha$). The factor is $(L+1)$ instead of $L$ due to the misalignment of the slots of nodes $i_1$ and $i_N$ in the unslotted 802.15.4 protocol.
To express the conditional probability in terms of $\varphi$, the transmission pattern needs to be understood. If there were no difference between sensing the channel and starting the transmission, then in the unslotted case no two nodes would be transmitting simultaneously, since the probability that two nodes start sensing simultaneously in the continuous case is zero. However, since there is a finite time between channel sensing and the start of the transmission, we assume that in the worst case, if two or more nodes start sensing in the same slot (slots are considered the same if the difference between their starting times is minimal), even if they are misaligned, the transmissions start in the same slot. The conditional probability is hence equivalent to
$$\Pr\Bigl(\bigcap_{k=2}^{n+1} S^{i_k}(s) = -1 \,\Big|\, S^{i_1}(s) = -1,\ S^{i_N}(c) = -1\Bigr) = \Pr\Bigl(\bigcap_{k=2}^{n+1} S^{i_k}(c) = -1 \,\Big|\, S^{i_1}(c) = -1,\ S^{i_N}(c) = -1\Bigr). \qquad (7.10)$$

Since it was assumed that the probability $\varphi$ to sense in a given slot is independent across nodes, we can easily see that this is
$$\Pr\Bigl(\bigcap_{k=2}^{n+1} S^{i_k}(c) = -1 \,\Big|\, S^{i_1}(c) = -1,\ S^{i_N}(c) = -1\Bigr) = \varphi^{\,n}(1-\varphi)^{N-2-n}, \qquad (7.11)$$

which requires nodes $i_2, \dots, i_{n+1}$ to sense and the remaining $N-2-n$ nodes not to sense in the sensing slot of $i_1$. As a result,

$$\alpha = (L+1)\bigl[1-(1-\varphi)^{N-1}\bigr](1-\alpha). \qquad (7.12)$$

From this, it is possible to derive a second expression for $\varphi$:
$$\varphi_2 = 1 - \left[1 - \frac{\alpha}{(L+1)(1-\alpha)}\right]^{\frac{1}{N-1}}.$$

The network operating point, as determined by $\varphi$ and $\alpha$, is given by solving the two non-linear Equations (7.7) and (7.12).
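
As an illustration, the two equations can be solved numerically with a simple damped fixed-point iteration; the code below transcribes Eqs. (7.5), (7.7) and (7.12), while the iteration scheme and the numerical parameters in main() are our own choices, not values from the thesis.

#include <cmath>
#include <cstdio>

// phi as a function of alpha, from Eq. (7.5) (which gives b00) and Eq. (7.7).
double phiFromAlpha(double a, int NB, int BEmin, int BEmax, double L, double X)
{
    double W0 = std::pow(2.0, BEmin);
    int diffBE = BEmax - BEmin;
    double aNB1 = std::pow(a, NB + 1);
    double brace =
          (1.0 + 2.0 * (1.0 - a) * (L + X)) * (1.0 - aNB1) / (1.0 - a)
        + 2.0 * X * aNB1
        + std::pow(2.0, diffBE) * W0 * (std::pow(a, diffBE + 1) - aNB1) / (1.0 - a)
        + W0 * (1.0 - std::pow(2.0 * a, diffBE + 1)) / (1.0 - 2.0 * a);
    double b00 = 2.0 / brace;                      // from Eq. (7.5)
    return b00 * (1.0 - aNB1) / (1.0 - a);         // Eq. (7.7)
}

// alpha as a function of phi, Eq. (7.12) rearranged for alpha.
double alphaFromPhi(double phi, int N, double L)
{
    double K = 1.0 - std::pow(1.0 - phi, N - 1);
    return (L + 1.0) * K / (1.0 + (L + 1.0) * K);
}

int main()
{
    // Illustrative parameters only: 8 nodes, BEmin=3, BEmax=8, NB=4, L=3 slots, X=20 slots.
    int N = 8, NB = 4, BEmin = 3, BEmax = 8;
    double L = 3.0, X = 20.0, alpha = 0.1;
    for (int it = 0; it < 200; ++it) {
        double phi  = phiFromAlpha(alpha, NB, BEmin, BEmax, L, X);
        double next = alphaFromPhi(phi, N, L);
        if (std::fabs(next - alpha) < 1e-9) { alpha = next; break; }
        alpha = 0.5 * (alpha + next);              // damped update for stability
    }
    std::printf("alpha = %f\n", alpha);
    return 0;
}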
Now it is possible to give the expression of the latency. The average delay for a node, given that the packet is successfully transmitted, is given as follows:

$$D = \sum_{i=0}^{NB}\left[\left(\sum_{k=0}^{i}\frac{W_k + 1}{2}\right) + L\right]\frac{\alpha^{i}(1-\alpha)}{1-\alpha^{NB+1}}\, r_s ,$$
where $r_s$ is the slot duration.
