get replies. Frank and Karl [6] rely on AODV [5], Wu and Zitterbart [13] use DSR [7]. These routing protocols (as many others [10]) use flooding to set up paths to destinations. Flooding limits the scalability of the routing protocols and, consequently, of the service discovery protocols that rely on them.

DHT-based peer-to-peer techniques have been proposed for service discovery in ad-hoc networks [1], due to their efficient lookup mechanism. However, this approach generates considerable network traffic and a high maintenance overhead, so it is not suitable for the WSN environment.

Kozat and Tassiulas [8] build a dominating set (or backbone) to which devices register their services. Due to the high density of nodes in the backbone (the dominating set is not independent), many loops are generated when a service discovery message travels the backbone nodes. To overcome this drawback, the backbone organizes itself in a source-based multicast tree. However, building and maintaining two overlays for the same purpose (the dominating set and the multicast tree) is expensive.

3 Design considerations

In this section we discuss, from the design perspective, several techniques for reducing the communication cost during (1) the discovery of services and (2) the maintenance of the distributed directory.

Our service discovery protocol uses an underlying clustering structure, where the clusterheads (or root nodes) form a distributed directory of service descriptions. During the discovery process, messages are exchanged among the clusterhead nodes. Therefore, the design issue for minimizing the discovery cost is that the root nodes have to be sparsely distributed over the deployment area. The clustering algorithm should construct an independent set of clusterheads, i.e. two root nodes are not allowed to be neighbors.

In the following, we give the design considerations for minimizing the communication cost during the maintenance of the distributed directory:

• Make decisions based on 1-hop neighborhood information. Clustering algorithms that require each node to have complete topology knowledge over a number of hops are expensive with regard to the maintenance cost. We aim to build a lightweight clustering structure that requires only the 1-hop neighborhood topology information.

• Avoid chain reactions. Several clustering algorithms [2] suffer from the chain reaction problem, where a single topology change in the network may trigger significant changes in the clustering structure. For a distributed directory composed of clusterhead nodes, a chain reaction leads to high overhead for maintaining consistent service registries. Therefore, an energy-efficient solution should avoid chain reactions, such that local topology changes determine only local modifications of the directory structure.

• Distribute the knowledge on adjacent clusters among cluster members. The knowledge on adjacent clusters should be distributed among the ordinary nodes. Only the root needs to know all the nearby clusters.

4 Clustering algorithm

4.1 Network model

We model a wireless network as an undirected graph G = (V, E), where V is the set of nodes and E is the set of links that directly connect two nodes. Two nodes u and v are neighbors if there is a direct communication channel between u and v. Each node is assigned (1) a unique hardware identifier, termed the address of the node, and (2) a weight, termed the capability grade, representing an estimate of the node's dynamics and available resources. The higher the capability grade, the more suitable the node is for the clusterhead role. We make the following assumptions:

• The capability grades are unique, as the node hardware identifier may be used to break ties.
• The lower layers (such as MAC) filter out asymmetrical links, so that we can rely on bidirectional communication.
• A node is aware of its neighbors and their capability grades.
• The lower layers (such as transport) provide a reliable, best-effort message delivery service.

Our clustering structure is a forest composed of a set of disjoint trees or clusters. The height of a cluster is the longest path from the root node to a leaf. We say that two trees are adjacent if there are two nodes, one from each tree, that are connected through a link.

Given a node v, we use the following notation:

• p(v) is the parent of v
• r(v) is the root (or clusterhead) of the cluster of v
• Γ(v) is the open neighborhood of v, Γ(v) = {u ∈ V | (u, v) ∈ E}
• Γ+(v) is the closed neighborhood of v, Γ+(v) = Γ(v) ∪ {v}
• Δ(v) is the set of children of node v, Δ(v) = {u ∈ V | p(u) = v}
• R_u(v) is the set of adjacent clusters of node v, represented by their roots, that can be reached through node u, where u ∈ Γ(v):
  – if u ∈ Δ(v), then R_u(v) is the set of root nodes of clusters adjacent to the sub-tree rooted at v
  – if u ∈ Γ(v) \ Δ(v) and r(u) ≠ r(v), then R_u(v) = {r(u)} \ {r(v)}
• S(v) is the set of services provided by node v
• S_u(v) is the set of services registered to v by u ∈ Δ(v).
Authorized licensed use limited to: Annamalai University. Downloaded on August 8, 2009 at 02:40 from IEEE Xplore. Restrictions apply.
Figure 1. Learning of adjacent clusters.

4.2 Construction of clusters

The construction of clusters follows the idea of a greedy algorithm: nodes choose a neighbor with a higher capability grade as parent, while nodes that do not have such a neighbor are roots. The message SetRoot is used for propagating the address of the root node to all the members of the cluster. The Initialization phase and the event SetRoot from Algorithm 1 give a formal description of the construction of clusters. The protocol works as follows:

• Nodes that have the highest capability grades among their neighbors declare themselves clusterheads and broadcast a SetRoot message announcing their roles.
• The remaining nodes choose as parent the neighbor with the highest capability grade.
• When a node receives a SetRoot message from its parent, it learns the cluster membership and rebroadcasts the SetRoot message.

4.3 Knowledge on adjacent clusters

Once the clustering structure is set up, the root nodes need to establish links to the adjacent clusters. The root nodes learn about the adjacent clusters from the nodes placed at the cluster borders. During the propagation of the broadcast message SetRoot down to the leaf nodes, the message is also received by nodes from adjacent clusters. These nodes store the adjacent root identity in their R_u(v) sets and report it to their parents. The information is propagated up the tree with a message which we term UpdateInfo. Through this message, nodes learn the next hops for the paths leading to the clusters adjacent to their sub-trees. In particular, the root nodes learn the adjacent clusters and the next hops on the paths to reach their clusterheads. Figure 1 gives an intuitive example of learning the adjacent clusters.

The events of receiving the messages SetRoot and UpdateInfo in Algorithm 1 describe how the knowledge and the paths to adjacent clusters are updated for a given node v. Duplicate UpdateInfo messages are discarded: a node v sends the message UpdateInfo to its parent if and only if the set of known root nodes changes.

Algorithm 1 Clustering algorithm - node v (events/actions)

Initialization:
1. r(v) ← ⊥; R_m(v) ← ∅, ∀m ∈ Γ(v)
2. choose p(v) ∈ Γ+(v) such that c(p(v)) = max{c(m) | m ∈ Γ+(v)}  // Parent is chosen
3. if p(v) = v then
4.   r(v) ← v  // I am root
5.   Send SetRoot(v, r(v)) to neighbors
6. end if

SetRoot(u, r): // Receive root r from neighbor u
1. R0 = ∪_{m∈Γ(v)} R_m(v)
2. if (p(v) = u) ∧ (r(v) ≠ r) then
3.   r(v) ← r
4.   Send SetRoot(v, r(v)) to neighbors
5.   ∀m ∈ Γ(v), R_m(v) ← R_m(v) \ {r(v)}
6. else if (r(v) ≠ r) ∧ (r ≠ ⊥) then
7.   R_u(v) ← {r}
8. else if (r(v) = r) then
9.   R_u(v) ← ∅
10. end if
11. if (v ≠ p(v)) ∧ (R0 ≠ ∪_{m∈Γ(v)} R_m(v)) then
12.   Send UpdateInfo(v, ∪_{m∈Γ(v)} R_m(v)) to p(v)
13. end if

UpdateInfo(u, R): // Receive adjacent clusters R from u
1. R0 = ∪_{m∈Γ(v)} R_m(v)
2. R_u(v) ← R \ {r(v)}
3. if (v ≠ p(v)) ∧ (R0 ≠ ∪_{m∈Γ(v)} R_m(v)) then
4.   Send UpdateInfo(v, ∪_{m∈Γ(v)} R_m(v)) to p(v)
5. end if

LinkAdd(u, c): // u added to neighborhood, with capability c
1. Γ(v) ← Γ(v) ∪ {u}
2. if c > c(p(v)) then
3.   if (v ≠ p(v)) then
4.     Send UpdateInfo(v, ∅) to p(v)
5.   end if
6.   p(v) ← u  // The new neighbor becomes parent
7.   Send UpdateInfo(v, ∪_{m∈Γ(v)} R_m(v)) to p(v)
8. end if
9. Send SetRoot(v, r(v)) to neighbors

LinkDelete(u): // u deleted from neighborhood
1. R0 = ∪_{m∈Γ(v)} R_m(v)
2. Γ(v) ← Γ(v) \ {u}  // Remove neighbor
3. if u = p(v) then
4.   choose p(v) ∈ Γ+(v) such that c(p(v)) = max{c(m) | m ∈ Γ+(v)}
5.   if p(v) = v then
6.     r(v) ← v
7.     Send SetRoot(v, r(v)) to neighbors
8.   else
9.     if r(v) ≠ r(p(v)) then
10.      r(v) ← r(p(v))  // Update cluster membership
11.      ∀m ∈ Γ(v), R_m(v) ← R_m(v) \ {r(v)}
12.      Send SetRoot(v, r(v)) to neighbors
13.    end if
14.    Send UpdateInfo(v, ∪_{m∈Γ(v)} R_m(v)) to p(v)
15.  end if
16. else if (v ≠ p(v)) ∧ (R0 ≠ ∪_{m∈Γ(v)} R_m(v)) then
17.   Send UpdateInfo(v, ∪_{m∈Γ(v)} R_m(v)) to p(v)
18. end if
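On a static topology, the greedy parent-selection rule of Section 4.2 can be simulated directly: every node picks the highest-grade node in its closed neighborhood as parent, and a node that is a local maximum becomes a root. The sketch below is our own illustration (not the paper's message-passing implementation), with ties broken by address so that grades are effectively unique, as the paper assumes:

```python
def build_clusters(neighbors: dict[int, set[int]], capability: dict[int, float]):
    """Greedy cluster construction: parent = best node in Γ+(v); roots are local maxima.

    neighbors: undirected adjacency sets; capability: c(v) per node.
    Returns (parent, root) maps. Chains of parent pointers have strictly
    increasing grades, so following them always terminates at a root.
    """
    def grade(v):
        # (capability, address) breaks ties by address, making grades unique
        return (capability[v], v)

    parent = {v: max(nbrs | {v}, key=grade) for v, nbrs in neighbors.items()}

    root: dict[int, int] = {}
    def find_root(v):
        # follow parent pointers up to the clusterhead, memoizing on the way
        if v not in root:
            root[v] = v if parent[v] == v else find_root(parent[v])
        return root[v]

    for v in neighbors:
        find_root(v)
    return parent, root
```

Note that two roots can never be neighbors: if u ∈ Γ(v), at most one of u and v can be the maximum of its own closed neighborhood, so the clusterheads form an independent set, as required in Section 3.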
4.4 Maintenance in the face of topology changes

We analyze how the clustering structure adapts to dynamic environments. We term the events regarding topology changes LinkAdd and LinkDelete. Algorithm 1 gives a detailed description of the behavior of node v when these events occur. In short, there are two situations where nodes adjust their cluster membership:

• A node discovers a new neighbor with a higher capability grade than its current parent. The node then selects that neighbor as its new parent.
• A node detects the failure of the link to its parent. The node then chooses as new parent the node with the highest capability grade in its neighborhood.

Besides reclustering, topology changes may also require modifications in the knowledge on adjacent clusters. The SetRoot message informs nodes about the cluster membership of their neighbors, while the UpdateInfo message is used for transmitting the updates from children to their parents. We distinguish the following situations:

• A node v detects a new neighbor from a different cluster. Consequently, v adds the root of that cluster to its knowledge.
• A node v switches from parent p0 to p1. Then v (1) notifies p0 to remove the information associated with v and (2) sends the list of adjacent clusters to p1.
• A node v detects the failure of the link to one of its neighbors u. As a result, v erases the knowledge associated with u.
• Any change of the global knowledge at node v results in transmitting the message UpdateInfo from v to its parent.

5 Service discovery protocol

We now present the service discovery protocol, which relies on the clustering structure presented in Section 4.

5.1 Service registration

Each node keeps a registry of the service descriptions of the nodes placed below it in the hierarchy. The root node knows all the service descriptions offered by the nodes in its cluster. Since the registration process requires unicast messages to be transmitted from children to parents, it can be easily integrated with the transfer of knowledge on adjacent clusters. Thus, the message UpdateInfo is used for both service registrations and transferring the knowledge on adjacent clusters. Algorithm 2 shows the integrated version of the UpdateInfo message, where a node updates the information on both the adjacent clusters and the known services.

Algorithm 2 Service registration - node v

UpdateInfo(u, R, S): // receive adjacent clusters R and services S from u
1: R0 = ∪_{m∈Γ(v)} R_m(v)
2: S0 = ∪_{m∈Δ(v)} S_m(v) ∪ S(v)
3: Δ(v) ← Δ(v) ∪ {u}
4: S_u(v) ← S
5: R_u(v) ← R \ {r(v)}
6: if (v ≠ p(v)) ∧ ((R0 ≠ ∪_{m∈Γ(v)} R_m(v)) ∨ (S0 ≠ ∪_{m∈Δ(v)} S_m(v) ∪ S(v))) then
7:   Send UpdateInfo(v, ∪_{m∈Γ(v)} R_m(v), ∪_{m∈Δ(v)} S_m(v) ∪ S(v)) to p(v)
8: end if

In the following we describe how the distributed service registry is kept consistent when the topology changes. In the case of a parent reselection, a child node v registers the services from its sub-tree with the new parent p1, and notifies the old parent p0 (if it is still reachable) to purge the outdated service information. The process is transparent for the other nodes in the sub-tree rooted at v. If the overall service information at p0 and p1 changes due to the parent reselection, the modifications are propagated up the hierarchy.

5.2 Service discovery

The service discovery process uses the distributed directory of service registrations. Suppose a node in the network generates a service discovery request ServDisc. The request is first checked against the local registrations. If no match is found, the message is forwarded to the parent. This process is repeated until the ServDisc message reaches the root of the cluster. When a root node receives a ServDisc message and does not find a match in the local registry, the message is forwarded to the roots of the adjacent clusters. The next hop on the path leading to an adjacent cluster is decided by every node that acts as a forwarder of the ServDisc message. Each node v along the path checks its R_u(v) sets and picks a neighbor that has a path to the root of the adjacent cluster. If a link is deleted and v cannot forward the ServDisc message, it chooses another neighbor that provides a path to the destination. If such a neighbor does not exist, v informs its parent that it no longer has a route to the next cluster. The same procedure is repeated until all the paths to the destination are tested. If the next cluster is not reachable, the root node erases the cluster from its knowledge.

The service discovery reply may follow the reverse cluster-path to the client, or any other path if a routing protocol is available. In the first case, if there is a cluster partition, the path can be reconstructed using the same search strategy as for the ServDisc message, where this time the service is the address of the client.

Caching the service discovery messages is a technique that allows us to cope with mobility. Root nodes cache the ServDisc messages for a limited period of time. If a newly arrived node registers a service for which there is a match in the cache, the root node can respond to the old service
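The aggregation a node performs on an incoming registration can be paraphrased in a few lines; the Python sketch below is our own restatement of the integrated UpdateInfo handler (Algorithm 2), using a plain dict for the node state rather than the paper's message-passing formulation. It returns whether the node's aggregate view changed, i.e. whether it should notify its own parent:

```python
def handle_update_info(state: dict, u: int, adj_roots: set, services: set) -> bool:
    """Merge the adjacent clusters R and services S reported by child u.

    state holds: address, parent, root, children (Δ(v)), S (own services),
    S_reg (S_u(v) per child), R (R_u(v) per neighbor). Returns True when the
    aggregate knowledge changed and v is not a root, i.e. when an UpdateInfo
    should be sent to p(v) (duplicate-suppression rule of Section 4.3).
    """
    old_roots = set().union(*state["R"].values()) if state["R"] else set()
    old_services = set().union(*state["S_reg"].values(), state["S"])

    state["children"].add(u)
    state["S_reg"][u] = set(services)                  # S_u(v) ← S
    state["R"][u] = set(adj_roots) - {state["root"]}   # R_u(v) ← R \ {r(v)}

    new_roots = set().union(*state["R"].values())
    new_services = set().union(*state["S_reg"].values(), state["S"])
    is_root = state["parent"] == state["address"]
    return (not is_root) and (new_roots != old_roots or new_services != old_services)
```

Calling the handler twice with identical arguments returns False the second time, which is exactly the duplicate-discarding behavior that keeps maintenance traffic low.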
request. Moreover, when a root node learns of a new adjacent cluster, it sends the valid service request entries from its cache to the new clusterhead. As a result, the overall hit ratio is improved.

Algorithm 3 describes the protocol without caching implemented. The message ServDisc has four parameters: the neighbor u that sends the request, the service description s, the final destination d of the message (typically a root node) and a flag f. The flag indicates whether the message is a fresh service discovery request, or a failure notification of a previous attempt to reach an adjacent cluster. In the latter case, the failed route is erased from the knowledge on adjacent clusters and another message is sent along an alternate path.

Algorithm 3 Service discovery - node v

ServDisc(u, s, d, f): // receive message ServDisc from neighbor u, requesting service s, destination d, flag f
1. if f = TRUE then
2.   if s ∈ ∪_{m∈Δ(v)} S_m(v) ∪ S(v) then
3.     Service found; generate reply
4.   else if p(v) = v then
5.     for all r ∈ ∪_{m∈Γ(v)} R_m(v) do
6.       Pick m ∈ Γ(v) such that r ∈ R_m(v)
7.       Send ServDisc(v, s, r, TRUE) to m
8.     end for
9.   else if d = r(v) then
10.     Send ServDisc(v, s, d, TRUE) to p(v)
11.   else if d ∈ ∪_{m∈Γ(v)} R_m(v) then
12.     Pick m ∈ Γ(v) such that d ∈ R_m(v)
13.     Send ServDisc(v, s, d, TRUE) to m
14.   else
15.     Send ServDisc(v, s, d, FALSE) to p(v)
16.   end if
17. else
18.   R_u(v) ← R_u(v) \ {d}
19.   if d ∈ ∪_{m∈Γ(v)} R_m(v) then
20.     Pick m ∈ Γ(v) such that d ∈ R_m(v)
21.     Send ServDisc(v, s, d, TRUE) to m
22.   else if p(v) ≠ v then
23.     Send ServDisc(v, s, d, FALSE) to p(v)
24.   end if
25. end if

6 Performance evaluation

In this section we evaluate the proposed clustering algorithm by comparing it to DMAC [2], and we measure the performance of the service discovery protocol when using both clustering schemes as distributed directory structures. Firstly, we briefly describe DMAC and provide a theoretical comparison between the two algorithms regarding the cluster density. Secondly, we introduce the general setting for both static and dynamic simulation experiments. Finally, we present the simulation results, including a performance evaluation of the service discovery protocol running on both structures under the same topological conditions.

In the following, we use the notation C4SD (Clustering for Service Discovery) for our proposed clustering algorithm. N represents the number of nodes in the network, r is the transmission range and a is the side of the square deployment area of size a × a.

6.1 DMAC clustering algorithm

We choose DMAC as a viable clustering alternative for our service discovery protocol. Its simplicity and good performance results [3] make it suitable for sensor environments. DMAC achieves fast convergence, as nodes decide their roles based only on 1-hop neighborhood information. DMAC constructs the clusters based on unique weights assigned to the nodes. The higher the weight, the more suitable the node is for the clusterhead role. The difference with our clustering algorithm is that DMAC imposes a maximum cluster height of one, whereas our protocol in principle may lead to arbitrary cluster heights. For the construction of clusters, DMAC uses two types of broadcast messages, Clusterhead and Join, announcing the roles of the nodes to their neighbors. The role decision of a node depends on the decisions of the neighbors with higher weights. Therefore, a single topology change may trigger the reclustering of a whole chain of dependent nodes. This phenomenon is called a chain reaction. For a distributed directory composed of clusterhead nodes, the chain reaction leads to high overhead for maintaining consistent service registries. In Section 6.5 we study the impact of the cluster height and the chain reaction on the performance of the service discovery protocol, in comparison with our proposed clustering solution.

6.2 Cluster density

The number of clusters is an important measure for the performance of a clustering algorithm that is intended to be used as a basis for a search mechanism. A high density of clusterheads leads to a large number of loops that occur during the discovery process.

We consider the nodes distributed over an area according to a homogeneous Poisson point process with density ρ = N/a². The spatial distribution of the root nodes for both clustering algorithms belongs to the family of hard-core point processes [11], in which the constituent points are forbidden to lie closer together than a certain minimum distance. For our clustering algorithm, we approximate the cluster density by using the Matérn hard-core process. The retaining probability of nodes that become roots is the following:

    P_C4SD = (1 / (ρπr²)) (1 − e^(−ρπr²))    (1)

This result enables us to compute the estimated number of clusters:

    E_C4SD = P_C4SD · N = (a² / (πr²)) (1 − e^(−Nπr²/a²))    (2)
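Equations (1) and (2) are easy to evaluate numerically; the short sketch below (our own check, with the simulation's a = 500 m and r = 0.2a as sample values) computes the retaining probability and the expected number of clusters:

```python
import math

def p_c4sd(N: int, r: float, a: float) -> float:
    """Eq. (1): Matérn hard-core retaining probability, with ρ = N/a²."""
    x = (N / a**2) * math.pi * r**2          # ρπr²
    return (1 - math.exp(-x)) / x

def e_c4sd(N: int, r: float, a: float) -> float:
    """Eq. (2): expected number of clusters, E = P·N = (a²/πr²)(1 − e^(−Nπr²/a²))."""
    return p_c4sd(N, r, a) * N
```

For large N the exponential term vanishes, so E_C4SD saturates at a²/(πr²), i.e. roughly one cluster per transmission-range disk; for a = 500 and r = 100 that is about 8 clusters regardless of how dense the network becomes.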
The results obtained by Bettstetter [3] for the DMAC clustering algorithm indicate the following probability for a randomly chosen node to become a clusterhead:

    P_DMAC = 1 / (1 + ρπr²/2)    (3)

Thus, the estimated number of clusterheads in DMAC is:

    E_DMAC = P_DMAC · N = 1 / (1/N + πr²/(2a²))    (4)

From Eq. 2 and 4 it can be easily shown that:

• E_C4SD < E_DMAC
• for r and a fixed, the function f(N) = E_DMAC − E_C4SD is strictly increasing
• lim_{N→∞} E_DMAC = 2 lim_{N→∞} E_C4SD

We can conclude that C4SD has a lower cluster density, and that the difference in the number of clusters built by the two protocols increases with the network density. Moreover, C4SD almost halves the total number of clusters for saturated areas.

6.3 Simulation settings

For our experiments we use the OMNeT++ [12] simulation environment. We generate a random network by placing N nodes, uniformly distributed, on a square area of size a × a, where a = 500 m. We consider links to be bidirectional, so nodes have the same transmission range, r. There is a link between two nodes if the distance between them is less than or equal to r. Each node chooses a capability grade from a uniform distribution. Static nodes have higher capability grades than mobile nodes.

We test the performance of both clustering algorithms under the same topological conditions. We implement on top of DMAC the algorithm for maintaining the knowledge on adjacent clusters and for updating the service registry, using the UpdateInfo message. We use a heartbeat broadcast message, periodically sent by every node, to maintain the neighborhood information and to trigger the events LinkAdd and LinkDelete. The heartbeat is also used for the cluster setup and maintenance, replacing the SetRoot message for C4SD and the Clusterhead and Join messages for DMAC. The focus of our comparative simulations is the overhead induced by the UpdateInfo and ServDisc messages in dynamic environments.

For measuring the cluster height of C4SD we use the cyclic distance model for link formation, in order to avoid border effects [3]. In this model, nodes at the border of the system area establish links via the borderline to the nodes located at the opposite side of the area. This setup approximates an area where nodes are distributed according to a Poisson point process [3].

In the dynamic experiments we use a simplified version of the random waypoint model [7]. We assume that the mobile nodes represent people walking, so the dynamics of the network is moderate. The transmission range is r = 0.2a. At the beginning, nodes are randomly placed on the simulation area, where they stay for a specified period of time. After this time expires, they choose a random destination and start moving towards it. Nodes move at 1 m/s, the approximate speed of a walking person. Upon arrival at the destination, nodes pause for 30 seconds before restarting the process. Due to the initialization problems that characterize the random waypoint mobility model [4], we discard the initial 1000 seconds of simulation time in each simulation trial and count the number of messages for the next 1000 seconds. We average the results over at least 50 simulations.

6.4 Cluster height

In the first set of experiments we measure the average cluster height for our proposed clustering algorithm, and we show that it is a function only of the expected number of neighbors (or node degree). The expected node degree for a Poisson point process is [3]:

    E(D) = N πr²/a²    (5)

We experiment with three transmission ranges: 0.1a, 0.2a and 0.3a. Figure 2 shows the results for these three values, as a function of the expected node degree, with the 5th and 95th percentile values as error bars. We notice that for all three transmission ranges, the points follow the same curve. Consequently, our first conclusion is that the average cluster height does not depend on the number of nodes, but only on the expected number of neighbors. The second conclusion is that the average cluster height is lower than 2, and at least 95% of the clusters have a height lower than or equal to three. This important result indicates that we can achieve clusters of relatively small height without imposing a maximal hop diameter limit, which would increase the maintenance effort and generate chain reaction effects.

6.5 Service discovery performance

We test the performance of the service discovery protocol using both DMAC and C4SD. Due to the mentioned dissimilarities between the two protocols, we expect different behaviors when using them for discovery purposes: (1) the chain reaction of DMAC determines reclustering and re-registration of services with new clusterheads, implying a higher maintenance overhead; (2) smaller-height clusters achieve faster convergence and a higher hit ratio; (3) fewer clusters imply fewer loops in the discovery process and, consequently, a lower discovery overhead. We evaluate the energy efficiency in terms of communication costs.
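The three claims that follow from Eq. (2) and (4) in Section 6.2 can also be verified numerically. The sketch below is our own check, using the simulation values a = 500 and r = 0.2a:

```python
import math

def e_c4sd(N: int, r: float, a: float) -> float:
    """Eq. (2): expected number of C4SD clusters."""
    x = N * math.pi * r**2 / a**2            # ρπr²
    return (a**2 / (math.pi * r**2)) * (1 - math.exp(-x))

def e_dmac(N: int, r: float, a: float) -> float:
    """Eq. (4): expected number of DMAC clusterheads."""
    return 1.0 / (1.0 / N + math.pi * r**2 / (2 * a**2))

r, a = 100.0, 500.0
prev_gap = 0.0
for N in range(10, 500, 10):
    gap = e_dmac(N, r, a) - e_c4sd(N, r, a)
    assert gap > 0          # E_C4SD < E_DMAC
    assert gap > prev_gap   # the difference grows with the network density
    prev_gap = gap

# In the limit: E_DMAC → 2a²/(πr²) and E_C4SD → a²/(πr²), so the ratio tends to 2
ratio = e_dmac(10**9, r, a) / e_c4sd(10**9, r, a)
assert abs(ratio - 2.0) < 1e-3
```

The asymptotic factor of 2 is the analytical basis for the claim that C4SD almost halves the number of clusters in saturated areas.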
[Figure 2: average cluster height vs. expected node degree, for r = 0.1a, 0.2a and 0.3a]
Figure 2. Average cluster height.

6.5.1 Maintenance overhead

In the first experiment we study the impact of the network density on the maintenance overhead (number of UpdateInfo messages), when 50% of the nodes are moving according to the mobility model described in Section 6.3.

When a node moves from one cluster to another, the old service registration is deleted and a new registration is sent to the new clusterhead. However, the knowledge on adjacent clusters requires more overhead, since the changes are propagated to the root nodes of the adjacent clusters. On the one hand, due to its lower cluster density, C4SD has a lower overhead for maintaining the knowledge on adjacent clusters. On the other hand, the service registration is cheaper for DMAC due to the smaller cluster height. We are interested in examining the tradeoff of cumulative maintenance overhead at different network densities.

Figure 3 shows the average number of messages sent and received by a node in the network per second. For sparse networks, where there are few neighboring clusters, the DMAC protocol behaves better. For dense networks, the effort for maintaining the knowledge of adjacent clusters becomes prevalent over the overhead of service registrations, and thus C4SD overtakes DMAC.

[Figure 3: average number of messages per second vs. number of nodes, for C4SD and DMAC]
Figure 3. Average number of UpdateInfo messages depending on the number of nodes.

We analyze the behavior further in terms of maintenance overhead when increasing the network mobility. Figure 4 shows the experimental results with 100 nodes and a percentage of mobile nodes between 10% and 90%. We count the average number of messages per second sent and received by a node. C4SD behaves progressively better with increasing network mobility. The reason is that the chain reaction inherent to DMAC triggers additional maintenance overhead for the directory structure, where the service information and the knowledge on adjacent clusters need to be updated at the new clusterheads. The more dynamic the network, the more probable this reaction is to occur.

[Figure 4: average number of messages per second vs. percentage of moving nodes, for C4SD and DMAC]
Figure 4. Average number of UpdateInfo messages depending on the percentage of moving nodes.

6.5.2 Hit ratio

Since C4SD has an average cluster height bigger than DMAC's, the convergence of service registrations is slower. In consequence, we expect DMAC to have a better hit ratio. For a fair comparison, we assume that each node provides exactly one service and that for each service there is exactly one service provider. We generate random service requests from arbitrarily chosen nodes. During 1000 seconds of simulation time we issue 10 service requests, with a delay of 100 seconds. If a service request reaches the matching service provider, we have a hit.

In our first experiments, no caching mechanism is involved. Figure 5 shows the results depending on the percentage of moving nodes. As expected, DMAC performs better than C4SD due to its faster convergence. However, the DMAC hit ratio drops similarly when increasing the network mobility. In our second set of experiments we implement a limited-time caching of service requests (see Section 5.2). By implementing caching we obtain a high hit ratio for both protocols, which is above 0.98 for all mobility cases that we consider (see Figure 5).
[Figure 5: hit ratio vs. percentage of moving nodes, for C4SD and DMAC, with and without caching]
Figure 5. Hit ratio.

6.5.3 Discovery cost

We are interested in the number of ServDisc messages exchanged during one service discovery phase. Since C4SD has a lower cluster degree, we expect it to also experience a lower discovery cost. Figure 6 shows the average number of service discovery messages per node, sent and received during one service discovery phase. We notice that caching implies more messages spent in the service discovery phase. The discovery cost is significantly smaller for C4SD, due to the lower cluster density. Moreover, DMAC experiences a rapid growth in the discovery cost when caching is implemented.

[Figure 6: average number of messages vs. percentage of moving nodes, for C4SD and DMAC, with and without caching]
Figure 6. Average number of ServDisc messages.

7 Conclusions

This paper proposes an energy-efficient solution to service discovery in wireless sensor networks. The discovery protocol relies on a clustering structure that offers distributed storage of service descriptions. The clusterheads act as directories for the services in their clusters. The structure ensures low construction and maintenance overhead, avoids the chain reaction problems and keeps a sparse network of nodes in the distributed directory.

Our comparison with DMAC shows different performance of the service discovery protocol depending on the underlying clustering structure. We show that the chain reaction of DMAC determines reclustering and re-registration of services with new clusterheads, implying a higher maintenance overhead. Our clustering algorithm achieves fewer clusters and, consequently, a lower discovery overhead. The smaller-height clusters of DMAC lead to faster convergence and a higher hit ratio. The hit ratio is improved to more than 98% for both protocols if a mechanism of limited-time caching is implemented for the service discovery messages. Our protocol has a lower discovery cost in both implementation alternatives.

For future work, we consider introducing dynamic capability grades, in order to avoid overloading the root and parent nodes with service registrations. The idea is that nodes that reach their memory limit decrease their capability grade and thus a part of their children will register to other nodes.

References

[1] M. Balazinska, H. Balakrishnan, and D. Karger. INS/Twine: A scalable peer-to-peer architecture for intentional resource discovery. In Pervasive '02, pages 195–210, August 2002.
[2] S. Basagni. Distributed clustering for ad hoc networks. In ISPAN '99, pages 310–315, Washington, DC, USA, 1999. IEEE Computer Society.
[3] C. Bettstetter. Mobility Modeling, Connectivity, and Adaptive Clustering in Ad Hoc Networks. PhD thesis, Technische Universität München, Germany, Oct. 2003.
[4] T. Camp, J. Boleng, and V. Davies. A survey of mobility models for ad hoc network research. WCMC: Special issue on Mobile Ad Hoc Networking: Research, Trends and Applications, 2(5):483–502, 2002.
[5] S. Das, C. E. Perkins, and E. M. Royer. Ad hoc on demand distance vector (AODV) routing. Internet-Draft Version 4, IETF, October 1999.
[6] C. Frank and H. Karl. Consistency challenges of service discovery in mobile ad hoc networks. In MSWiM '04, pages 105–114, New York, NY, USA, 2004. ACM Press.
[7] D. B. Johnson and D. A. Maltz. Dynamic source routing in ad hoc wireless networks. In Imielinski and Korth, editors, Mobile Computing, volume 353, pages 153–181. Kluwer Academic Publishers, 1996.
[8] U. C. Kozat and L. Tassiulas. Service discovery in mobile ad hoc networks: An overall perspective on architectural choices and network layer support issues. Ad Hoc Networks, 2(1):23–44, June 2003.
[9] R. Marin-Perianu, H. Scholten, and P. Havinga. CODE: A description language for wireless collaborating objects. In ISSNIP '05, pages 169–174. IEEE Computer Society Press, December 2005.
[10] V. D. Park and M. S. Corson. A highly adaptive distributed routing algorithm for mobile wireless networks. In INFOCOM '97, volume 3, pages 1405–1413. IEEE, April 1997.
[11] D. Stoyan, W. S. Kendall, and J. Mecke. Stochastic Geometry and its Applications. John Wiley and Sons, 1995.
[12] A. Varga. The OMNeT++ discrete event simulation system. In ESM '01, Prague, Czech Republic, June 2001.
[13] J. Wu and M. Zitterbart. Service awareness in mobile ad hoc networks. In Paper Digest of the 11th IEEE Workshop on Local and Metropolitan Area Networks (LANMAN), Boulder, Colorado, USA, March 2001.