This document provides a management-level introduction to Data Distribution Service (DDS) technology and capabilities. It was prepared by the OMG at the request of the US Navy to explain the data-centric software architectural principles of DDS and how they can help the Navy meet its agility and cost-control requirements.
Copyright 2011, Object Management Group (OMG). All Rights Reserved.
Data Distribution Service
(DDS) Brief
Standards-Based Data-Centric Messaging
from the Object Management Group (OMG)
1 Executive Summary
The U.S. Navy SPAWAR-ATL Engineering Competency asked the Object Management Group
(OMG) to deliver a paper describing the technical capabilities of its Data Distribution Service
(DDS). This paper complements an earlier one prepared in collaboration with several DDS
vendors for the Office of the Secretary of Defense (OSD), which describes DDS adoption in
military and commercial applications. That paper, “The Data Distribution Service: Reducing
Cost through Agile Integration,” is hosted online by the UAS Control Segment (UCS) program1.
Navy decision makers are being asked to respond more quickly on the basis of increasing
volumes of information. This information is sourced from multiple systems of systems executing
on heterogeneous platforms and networks. To face this challenge, the Navy needs to increase its
leverage from proven technology and increase the integration between existing systems. Navy
leadership has embraced these requirements with mandates for Open Architecture integration
based on open standards and off-the-shelf products. These principles help the Navy to align its
technology roadmap with broader industry directions and to empower competitive markets that
reduce vendor lock-in and drive down costs.
The OMG has long been a favored venue for the collaboration of Navy interests with industry
thought leaders around the promulgation of relevant standards. Navy Surface Warfare Center
(NSWC), Navy Undersea Warfare Center (NUWC), Boeing, Lockheed Martin, General
Dynamics, Northrop Grumman, and other U.S. and allied organizations are all active
participants. DDS technology in particular has been rapidly and widely adopted by these
organizations. This adoption has been driven by the ease and flexibility with which it can be used
to develop, maintain, and integrate complex systems while maintaining strong performance and
governance. DDS is supported by a large vendor community and has been called out in U.S.
DoD guidance from Navy Open Architecture, DISA, NESI, UCS, and other U.S. and allied
organizations. This guidance has been borne out in hundreds of defense and civilian programs,
and DDS implementations exist at Technology Readiness Level (TRL) 9.
This paper describes the software architectural principles that can help the Navy to meet its
agility and cost-control requirements. It further describes how DDS technology in particular
supports this architecture—not just hypothetically but in real-world systems—in unique and
powerful ways.
1 See http://www.ucsarchitecture.org/downloads/DDS%20Exec%20Brief%20v20l-public.pdf.
—1 of 20—
Table of Contents
1 Executive Summary .......................................................... 1
2 Step 1: System Architecture ............................................... 2
2.1 Benefits ................................................................ 4
2.2 Challenges Facing Traditional Implementations ........................... 5
2.3 An Improved Approach to Managing Data-Centricity ........................ 7
3 Step 2: Supporting the Architecture ....................................... 8
3.1 Data-Centric Messaging .................................................. 8
3.2 DDS ..................................................................... 9
4 Step 3: Instantiating the Architecture ................................... 11
4.1 Topology ............................................................... 12
4.2 Disadvantaged Networks ................................................. 13
4.3 Scalability ............................................................ 14
4.4 Security ............................................................... 15
5 Conclusion ............................................................... 17
6 Appendix: Technology Comparison .......................................... 17
6.1 Specification Comparison ............................................... 18
6.2 Vendor Comparison ...................................................... 19
2 Step 1: System Architecture
Industry has grappled for over a decade with the problem of deploying and maintaining groups of applications that on the one hand need to integrate with one another, but at the same time need to remain decoupled, so that they can join and leave the network dynamically, and so that they can evolve according to their independent life cycles.

The architecture they have followed is data-centric. Data-centric architecture is often instantiated in so-called “n-layer” or “n-tier” enterprise systems. Stateful data is maintained by infrastructure, and applications remain stateless. Applications do not communicate directly with one another; instead their interactions are mediated by the data and expressed in terms of changes to that data.

Figure 1—Schematic of a data-centric, n-layer architecture
This architecture is described as “data-centric” because it organizes applications and their
interactions in terms of stateful data rather than in terms of operations to be performed. It
conforms to the following principles:
1. The structure, changes, and motion of stateful data must be well defined and
discoverable, both for and by humans as well as automation. What do we mean by
“state”? State consists of the information about the application, the system, and the
external world that an application needs in order to interpret events correctly. For
example, suppose there is an announcement, “the score is four to three”. What game is
being played? Who are the players? Which one of them has four points and which three?
The answers to these questions comprise the state that is necessary to understand the
message.
2. State must be managed by the infrastructure, and applications must be stateless.
(This is also a recognized SOA pattern called “State Repository”2.)
3. State must be accessed and manipulated by a set of uniform operations. What do we
mean by “operations”? Operations express attempts to change the state. In a data-centric
architecture, the operations are uniform3. These operations are often referred to by the
acronym CRUD, for Create, Read, Update, and Delete, because most supporting
technologies define parallels for these concepts4.
Multiple technologies directly support this architecture, including relational databases, RESTful
web services5, and the OMG Data Distribution Service.
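The uniform CRUD operations of principle 3 can be made concrete with a minimal sketch using one of the supporting technologies listed above, SQL, via Python's built-in sqlite3 module. The track table and its columns are hypothetical, invented purely for illustration:

```python
import sqlite3

# Minimal sketch: the infrastructure (an in-memory SQLite database) holds
# the state; the "application" below keeps none of its own. The track
# table and its columns are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE track (id INTEGER PRIMARY KEY, lat REAL, lon REAL)")

# Create: introduce a new stateful data object
conn.execute("INSERT INTO track (id, lat, lon) VALUES (1, 36.85, -75.98)")

# Read: any application may query the current state at any time
row = conn.execute("SELECT lat, lon FROM track WHERE id = 1").fetchone()

# Update: express an attempt to change the state, not a message to a peer
conn.execute("UPDATE track SET lat = 36.86 WHERE id = 1")

# Delete: remove the object from the managed state
conn.execute("DELETE FROM track WHERE id = 1")
remaining = conn.execute("SELECT COUNT(*) FROM track").fetchone()[0]
```

The same four verbs map onto HTTP (POST, GET, PUT, DELETE) and onto DDS (write, read, dispose, unregister), which is what makes the operations uniform across supporting technologies.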
Consider a hypothetical distributed game of chess.
• A non-data-centric implementation might assume that all parties understand the initial
layout of the game. Then players would send their moves to one another—“pawn 4 to c3”
for example. Such an implementation further assumes that each recipient has out-of-band
access to its own copy of the current state of the board so that it can change that state
accordingly and that each player receives every message so that different copies don’t get
out of synch with one another.
• A data-centric implementation would present a common picture of the board to
authorized parties and allow them to query and modify it—to not only say that pawn 4
should move to c3, but also to ask what is at c3 beforehand. This state is maintained by
the infrastructure; applications do not need to maintain their own copies. And
applications act on this state independently of which other applications may or may not be
2 See http://soapatterns.org/state_repository.php for an introduction to this pattern.
3 Computer science uses the term “polymorphism” to describe a situation in which a common interface may be used to access different kinds of resources. Polymorphism helps software fit together like puzzle pieces: a component that understands a particular interface can communicate with any other component that understands the same interface. Data-centric architecture takes polymorphism to its logical conclusion: all state shares a single common set of operations.
4 In SQL, the uniform operations are INSERT, SELECT, UPDATE, and DELETE. In HTTP, they are POST, GET, PUT, and DELETE. In DDS, they are write, read, dispose, and unregister.
5 For a brief introduction to the Representational State Transfer (REST) pattern, see http://en.wikipedia.org/wiki/Representational_State_Transfer.
observing. (Note that a distributed infrastructure may communicate within itself using
messages, but applications are written to a higher level of abstraction.)
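A minimal sketch of the data-centric version of the chess game described above, with all class and piece names hypothetical: the board state lives in a shared repository, and applications query and modify that state rather than messaging one another.

```python
# Hypothetical sketch: the infrastructure holds the board as shared state.
# Applications query and modify the state; they never message each other.

class BoardState:
    """A trivial stand-in for infrastructure-managed state."""
    def __init__(self):
        self.squares = {}          # square -> piece, e.g. "c3" -> "pawn-4"

    def read(self, square):        # ask what is at a square beforehand
        return self.squares.get(square)

    def update(self, src, dst):    # express a move as a change to the state
        piece = self.squares.pop(src)
        self.squares[dst] = piece

board = BoardState()
board.squares["c2"] = "pawn-4"     # initial layout, held by the infrastructure

# Any authorized application can inspect the state before acting...
occupant = board.read("c3")        # nothing there yet
# ...and a late-joining observer (a GUI, a move recommender, a turn timer)
# sees the same board without replaying any message history.
board.update("c2", "c3")
```

Contrast this with the message-based version: a late joiner here needs no replay of past moves, because the current state is always queryable.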
The following sections describe the benefits of a data-centric approach, the challenges faced in
traditional implementations of the approach, and how data-centric messaging technologies like
DDS overcome those challenges.
2.1 Benefits
The benefits of data-centricity derive from the loose coupling between applications. They do not
communicate directly; instead, one modifies a given data object, and another observes the
change.
2.1.1 Reliability and Resiliency
Applications have the ability to obtain from the infrastructure the current state of the world in
which they’re interested. Therefore, if an application fails and has to be restarted, it can recover
very quickly. In contrast, if the application is not stateless, a restart is expensive and risky.
Message senders must store all messages that they sent during the failure and replay them upon
reconnection, because if the recovering application misses even a single message, its state will be
out of synch, and it will act on incorrect information. If message rates are high relative to the
recovery time, storing these messages will become infeasible.
For example, consider an intermittent network link, such as a satellite or line-of-sight radio.
Applications separated by such a link and architected in a data-centric way will be able to resume
communication by sending only the differences between the relevant pre-disconnection state and
the current post-reconnection state. This data volume is bounded in size and often much less than
the sum of all messages that might have been exchanged in the meantime.
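The bounded resynchronization described above can be sketched as a toy model (not any vendor's actual protocol): after reconnection, only the entries that differ between the pre-disconnection snapshot and the current state cross the link.

```python
# Toy sketch of diff-based resynchronization after an intermittent link
# recovers. Only changed entries cross the link, not the full message history.

def state_diff(pre_disconnect, current):
    """Return only the entries that changed or appeared while disconnected."""
    return {key: value
            for key, value in current.items()
            if pre_disconnect.get(key) != value}

# State snapshot the remote side last saw before the link dropped:
pre = {"track-1": (36.85, -75.98), "track-2": (36.90, -76.01)}

# Current state after many updates occurred during the outage. No matter
# how many messages were missed, the diff is bounded by the state's size.
cur = {"track-1": (36.85, -75.98), "track-2": (36.95, -76.07),
       "track-3": (37.01, -76.12)}

resync = state_diff(pre, cur)   # only track-2 and track-3 need to be sent
```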
2.1.2 Integration Complexity
It is best, when integrating multiple elements (applications or entire subsystems), to avoid mediating every element to every other. Such a design requires (n * (n – 1)) integrations per n elements—the complexity, effort, and cost of the integration increase with the square of the number of elements. Instead, employ the well-known Lingua Franca architectural pattern: design a normalized model, and integrate each element with that model. Each element need only be integrated once, and the complexity of the integration therefore increases linearly with the number of elements.

Figure 2—Relative complexity of point-to-point integration vs. applying the Lingua Franca Pattern, in the worst case
This pattern applies to state as well as to behavior. A set of point-to-point integrations in which neither state nor operations is normalized will consequently scale even worse than n². This additional complexity can be reclaimed by normalizing the programming interface and the message set with an ESB; complexity growth returns to n². A data-centric architecture takes the next step: operations are uniform, and state is normalized. The growth in the complexity is therefore linear.
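The worst-case counts contrasted in Figure 2 follow directly from the two formulas; a quick sketch:

```python
# Worst-case integration counts: every element mediated to every other
# (point-to-point) vs. each element integrated once against a normalized
# model (the Lingua Franca pattern).

def point_to_point(n):
    return n * (n - 1)      # grows with the square of the element count

def lingua_franca(n):
    return n                # grows linearly: one integration per element

counts = [(n, point_to_point(n), lingua_franca(n)) for n in (3, 6, 9)]
# At 9 elements, point-to-point already needs 72 integrations vs. 9.
```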
2.1.3 System Evolution Cost
Because applications have no awareness of one another, they can be added to or removed from
the system with much lower impact. Changes are limited in scope—replacing one component
with another does not require that all other components be updated as well.
Consider again the chess example above. What if I want to add a new application to the game—
perhaps to provide move recommendations, a turn timer, a GUI display, or other capability? Any
of these can be built based on the state of the board that I already have, and no other application
that uses that state needs to know or care that it’s being used for one more thing. A stateless ESB
cannot provide the same benefit, because it provides applications with no ability to query the
current state of the board—it deals only with stateless messages, not with stateful data.
2.1.4 Acquisition Flexibility
A standards-based data-centric system provides interoperability not only at the level of messages
on the network but also at the level of an operational picture. This higher level of interoperability
decreases lock-in not only to middleware vendors but to integrators as well, because the
integration is fundamentally governed. Which information is to be exchanged under which conditions is captured in explicit configuration, not buried in application code, and is accessible to any authorized vendor using industry-standard tools.
2.2 Challenges Facing Traditional Implementations
Before the advent of data-centric messaging, data-centric designs were primarily based on
proprietary and/or web-based protocols connecting “client” applications to relational databases.
Such implementations remain valid for many systems, but they also face significant challenges—
challenges that tempt some applications to abandon the architecture. This section describes some
of those challenges and the unfortunate result.
2.2.1 Challenges: Scalability, Reliability, Latency, and Management
Challenge #1: Vulnerabilities of shared infrastructure. Shared infrastructure, such as
databases and servers, can become a performance bottleneck and single point of failure.
Challenge #2: Synchronization of federated state. All applications may not have access to a
single common state repository. In such cases, it’s necessary to maintain copies of the relevant
state in multiple locations and to synchronize them independently of the applications. This is a
difficult task that not all teams are equipped to tackle.
Challenge #3: Data access latency. Messaging between the application that wants a piece of
state and the repository that has it can be slow. Response times may be acceptable in cases where
nothing changes faster than a human can process it—a person with a web browser is a good
example. But for machine-to-machine interactions, or machine-to-human interactions, this
latency proves much too high.
2.2.2 Typical Result: Brittle, Unmanageable Systems
Unfortunately, too often the challenges above lead architects to abandon data-centricity for ad
hoc approaches.
Figure 3—Tangled communication resulting from the application of messaging technology without a
governing architecture
• Rather than allowing their actions to be mediated by the data, applications send messages
directly to one another. They may use abstractions like message queues, but these
patterns remain fundamentally point-to-point.
• Rather than relying on the state repository to manage their data, every application
maintains its own state.
In effect, system-level state management is neglected. Does our experience lead us to believe
that when we don’t design something, it will nevertheless work well? The result instead is
systems that are brittle and difficult to manage.
Why brittle?
• Applications are coupled to one another, so they can’t come and go, or evolve over
time, independently.
Implications: Decreased operational flexibility and increased costs for maintenance and
integration.
• State is coupled to individual applications, not maintained independently, so new
applications can’t reuse the state that’s already there, and existing applications can’t
recover their state if they restart or relocate on the network.
Implications: Decreased reliability and resiliency and increased cost to develop and
integrate incremental functionality.
Why unmanageable?
• Data structure is ad hoc, so it’s impossible to detect whether a piece of information is
malformed until a particular application tries to access it. By then, it’s too late to avoid
and hard to debug.
Implication: Error detection will occur later in the process, when it’s more expensive to
fix and has a bigger impact on schedules.
• Data movement around the network is ad hoc, so as each application maintains its own
view of state, these views can get out of synch. Applications act on incorrect or obsolete
information and can’t respond in a timely way to new information.
Implication: Decreased reliability.
• Data change is ad hoc, so making sure that the right parties see the right changes in an
acceptable amount of time is the responsibility of every application—or else everything
has to be sent to a single central party on the network, and that one has to know
everything and never fail.
Implications: Increased upfront cost due to duplicated application-development effort,
increased maintenance costs due to inter-application coupling, and decreased reliability if
a single point of failure is introduced.
2.3 An Improved Approach to Managing Data-Centricity
The challenges described above can be solved in a more scalable way while retaining the data-
centric architecture. This preferred approach relies on data-centric messaging, which is described
in section 3.1 below.
Figure 4—Data-centric messaging improves scalability of data-centric architecture
Challenge #1: Vulnerabilities of shared infrastructure. Federate state management to where
it’s needed. Each portion of the network has independent access to exactly the state it needs at
that moment and no more. This is the logical conclusion of server federation: the more broadly
you federate, the smaller the burden is on any one party; each can remain lightweight. And there
are no longer any single points that can take out a whole network.
Challenge #2: Synchronization of federated state. This burden should not be on the
applications; it should be on a best-in-class infrastructure. Point-to-point messaging between
applications is replaced by data-centric publish-subscribe messaging within that infrastructure.
Instead of full consistency, it seeks eventual consistency. That guarantee is easier to maintain
under challenging network conditions, and the implementation can be orders of magnitude faster.
Challenge #3: Data access latency. Because we’re employing a solution based on distributed
state with eventual consistency, we can treat large-scale, long-term persistence as a separate
concern from application access. We can eliminate most databases and instead place lightweight
in-memory caches very close to each application—on the same node, or even within the same
process—to maximally reduce this latency. Meanwhile, we can place high-availability
persistence stores on appropriately provisioned nodes elsewhere on the network.
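One way to picture the separation of concerns described above is a toy local cache beside the application, with durable storage flushed as a separate concern; the class and its methods are hypothetical, not part of any DDS API.

```python
# Hypothetical sketch: the application reads from a lightweight in-memory
# cache (same node or same process), while updates propagate separately to
# an eventually consistent persistence store elsewhere on the network.

class LocalCache:
    def __init__(self):
        self._data = {}          # in-process copy of the relevant state
        self._pending = []       # updates queued for the persistence store

    def update(self, key, value):
        self._data[key] = value              # local reads see it immediately
        self._pending.append((key, value))

    def read(self, key):
        return self._data.get(key)           # no network round trip

    def flush(self, store):
        """Eventually push queued updates to durable storage."""
        for key, value in self._pending:
            store[key] = value
        self._pending.clear()

cache = LocalCache()
durable = {}                     # stand-in for a high-availability store

cache.update("track-1", (36.85, -75.98))
local_value = cache.read("track-1")   # visible locally at once
cache.flush(durable)                  # persistence handled separately
```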
3 Step 2: Supporting the Architecture
To support this architecture, we need technology that can govern data contents, as a database
can, as well as governing communication flows within complex networks, as messaging buses
can. And it must go beyond conventional message buses—it must tie messages back to the
underlying data objects and formally describe how those objects will be synchronized across the
network as they change.
3.1 Data-Centric Messaging
Data-centric messaging is the application of messaging to the distribution, access, and manipulation of stateful data: it supports data-centricity for data in motion, just as a relational database does for data at rest. The vendor community has been supporting such technology for over ten years. This section describes the technology generally.
As described in section 2.3 above, an architecture that employs data-centric messaging offers significant benefits over one instantiated solely on the basis of databases or solely on the basis of non-data-centric messaging.
• Reduced integration and maintenance costs,
as with any data-centric technology.
• Improved performance. Where a web service connected to a database might deliver a dozen data updates per second, an efficient data-centric messaging bus can deliver tens or even hundreds of thousands.

Figure 5—A database stores data. A data bus moves data.
• Improved scalability. Where centralized infrastructure might support a few connected
applications, a decentralized data bus can support hundreds or thousands—on less
hardware.
• Improved reliability and resiliency. Single points of failure have been eliminated.
• Improved manageability. The infrastructure enforces explicit contracts on the structure of
data, how it moves across the network, and how it changes over time. And if and when
these contracts are violated, it prevents incorrect communication, takes mitigating action,
and notifies the relevant applications.
Between 2001 and 2002, several of these vendors came together at the OMG to begin work on
standardizing data-centric messaging technology. The result, the Data Distribution Service
(DDS), is the subject of the next section.
3.2 DDS
The Data Distribution Service is the standard for data-centric messaging. Adopted by the OMG
in 20036, DDS now comprises a comprehensive suite of layered specifications. In particular, it is
the only widely deployed standards-based middleware technology to define code portability as
well as on-the-network interoperability.
• DDS itself, which defines the behavior of the bus as well as programming interfaces
in several languages. Thirty-six companies voted to adopt the original specification,
including Ericsson, Fujitsu, IBM, Lockheed Martin, MITRE, Mercury Computer
Systems, Nokia, Objective Interface Systems, Oracle, PrismTech, RTI, Rockwell Collins,
and THALES. The Navy Surface Warfare Center (NSWC) played a significant
supporting role. Today, approximately ten vendors support the specification.
• A network protocol, called Real-Time Publish-Subscribe (DDS-RTPS), which provides
interoperability among DDS implementations. This specification became available
through the OMG in 2008 at version 2.0. (It was based on an earlier specification, RTPS
1.0, which was standardized through the IEC in 2004.) The current version, 2.1, became
available in January 2009. Most vendors now support this protocol, and interoperability
has been publicly demonstrated at a number of OMG-hosted events.
• Integration with UML models to bridge the gap from design to implementation. This
UML profile was adopted in 2008.
• Enhancements to the type system to address system evolution and more flexible data views. This specification was adopted in 2010 and is currently in the process of finalization and implementation with the involvement of multiple vendors.
• Improvements to the C++ and Java programming interfaces to enhance portability, performance, and ease of use. These specifications were adopted in 2010 and are currently in the process of finalization and implementation with the involvement of multiple vendors.
• etc.

Founded in 1989, OMG is now the largest and longest-standing not-for-profit, open-membership consortium developing and maintaining computer industry specifications, with more than 470 member companies. It is continuously evolving to remain current while retaining a position of thought leadership. All OMG specifications are freely available to the public from www.omg.org.

DDS continues to define one of the most active communities within the OMG. In addition to ongoing direct collaboration among member organizations, the OMG hosts quarterly in-person
6 OMG issued an RFP for the definition of a data-centric publish-subscribe messaging bus in late 2001. Initial proposals were received from several vendors in 2002. The first version of the specification was preliminarily adopted in 2003 and finalized in 2004. The current version, 1.2, became available in January 2007.
technical meetings. OMG also hosts an annual workshop on time-critical systems that in recent
years has become increasingly focused on DDS technology. And the ecosystem continues to
grow, with new vendors joining the community and specifications for security enhancements and
web connectivity in progress.
3.2.1 Adoption
DDS has been adopted and/or mandated by many military and civilian organizations.
DDS plays a major role in naval combat systems in the U.S. and worldwide. It has been designed
into the Aegis, SSDS, and DDG 1000 programs and is deployed by allied navies, including those
of Germany, Japan, the Netherlands, and over a dozen more. DDS has been adopted by the
following organizations:
• U.S. Navy—Open Architecture, FORCEnet
• Defense Information Systems Agency (DISA)—Mandated standard within the DoD Information Technology Standards and Profile Registry (DISR)
• U.S. Air Force, Navy—Net-centric Enterprise Solutions for Interoperability (NESI)
• U.S. Army, OSD—UAS Control Segment (UCS)
• U.S. intelligence community
• UK Ministry of Defence—Generic Vehicle Architecture, an interoperable open architecture for unmanned vehicles
DDS is also used commercially in a number of industries, including communication,
transportation, financial services, SCADA, industrial automation, agriculture, power generation,
air traffic control, mobile asset tracking, and medicine. A number of universities worldwide are
using DDS in active research projects, including MIT, Carnegie Mellon University, and
Vanderbilt University in the U.S. and ETH Zurich, Technical University of Munich, and Korea
Advanced Institute of Science and Technology internationally.
The following sections describe some of the capabilities of DDS. These are not capabilities
specific to a particular vendor; they are specified within the DDS standard.
3.2.2 Managed Data Distribution
Non-data-centric messaging technologies just provide ways to send messages from A to B. Architects must develop their own idioms on top.

DDS is different. Like HTTP, DDS uses the technique of messaging to support system architecture. Because the data model is clear and explicit rather than implicit in static code, DDS can define, propagate, and govern data flows more comprehensively and more efficiently. DDS provides:
• Formal data design and integration to avoid lock-in to vendors and integrators
• Strong enforcement of data structure and quality of service to make propagation more efficient and catch errors sooner
• Comprehensive status monitoring to detect and mitigate potential problems at run time
• Flexible security with a comprehensive road map

Standards-Based Governance
• Data structure
• Data value history
• Rate of data value change
• Data ordering and coherency
• Lifespan/TTL
• Network partitions
• Resource utilization
• Priority
• Reliability
• Durability, persistence, and high availability
• Fault tolerance and fail-over
• Filters based on contents and time
• Publication/subscription matching
• Connection health

3.2.3 Flexible Deployment
DDS is the only widely deployed messaging technology to scale from embedded systems to data centers to global networks.
• DDS implementations support both peer-to-peer and brokered message-flow topologies,
which can be combined as needed for local, wide-area, and disadvantaged network
environments. See section 4.1, “Topology”, below for more information.
• DDS is interoperable across multiple programming languages, real-time and non-real-
time constraints, and enterprise and embedded platforms.
DDS is compatible with enterprise environments. In addition to support for higher-level
languages like Java, it has an API that is similar to other messaging technologies7. Vendors also
provide a variety of connectors to other standards-based messaging and storage technologies,
including JMS, databases, and web services.
4 Step 3: Instantiating the Architecture
This section applies the architectural principles described above, and the technologies that
support them, to describe the construction of flexible, performant, and affordable systems. It
focuses on four areas: topology, disadvantaged networks, scalability, and security.
7 OMG is the standards body responsible for both the DDS and CORBA specifications. However, these two technologies work differently and do not depend on one another.
4.1 Topology
Peer-to-peer communication is a fundamental building block of any network communication. Other topologies—such as brokered communication—are constructed from it. For example, a network of a data producer communicating with a data consumer by way of a broker consists of three peers, one in between the other two. It so happens that the middle peer is typically provided by the messaging vendor and provides services to the other two peers.

Figure 6—Peer-to-peer communication is a fundamental building block
4.1.1 Composing Brokered Networks
DDS specifications are defined peer-to-peer8 in order to provide implementers with maximum
flexibility. Most vendors take advantage of this and support peer-to-peer communication as an
option within their products. However, some DDS vendors support only brokered configurations, while others support peer-to-peer communication but also ship message brokers, so that users can compose their systems in whatever way is most appropriate.
Figure 7—A brokered network is composed of multiple peer-to-peer connections. However, whether that is
reflected in a given messaging implementation varies.
Most vendors of traditional non-data-centric messaging technologies support only brokered
configurations.
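The composition described above, with a broker as the middle peer, can be sketched as a toy model (not a DDS implementation): a brokered network is just peer-to-peer connections arranged so the middle peer forwards.

```python
# Toy sketch: a brokered topology is three peers, with the middle peer
# (the broker) forwarding data between the other two.

class Peer:
    def __init__(self, name):
        self.name = name
        self.inbox = []

    def send(self, peer, data):      # peer-to-peer: deliver directly
        peer.receive(data)

    def receive(self, data):
        self.inbox.append(data)

class Broker(Peer):
    """The middle peer: receives from one peer, forwards to the others."""
    def __init__(self, name, subscribers):
        super().__init__(name)
        self.subscribers = subscribers

    def receive(self, data):
        super().receive(data)
        for peer in self.subscribers:    # forward to downstream peers
            peer.receive(data)

producer = Peer("producer")
consumer = Peer("consumer")
broker = Broker("broker", [consumer])

# The brokered network is composed of two peer-to-peer connections:
# producer -> broker, then broker -> consumer.
producer.send(broker, "track-update-1")
```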
4.1.2 Composing Local and Global Networks
Different networks have different characteristics and requirements.
• Local networks support deterministic, low-latency communication. They can often take
advantage of efficient IP multicast. Applications running here may also be more trusted.
• Wide-area networks have higher latencies and may or may not support multicast. They
may route some transport protocols but not others (e.g., TCP but not UDP). Applications connected
across such networks may be less trusted than those running on a LAN.
• Disadvantaged wireless networks have significantly different reliability and performance
characteristics. Applications running here may be the least trusted, given that wireless
connections may be easier to intercept than wired connections.
• Any one of these physical networks, or a logical “subnet” within it, may represent an
independent security domain or subsystem.
8
This situation is not unlike that of other messaging technologies. For example, the non-data-centric Advanced
Message Queuing Protocol (AMQP) is also specified peer-to-peer.
Connecting these heterogeneous environments requires mediation—a broker to filter and
transform the data, cleanse it to meet IA requirements, and ultimately forward it to the other side.
However, brokers may not be needed within the LAN, because it is a more controlled
environment. These opportunities and constraints lead us to design networks such as the one
depicted in the following figure.
Figure 8—A composite network taking advantage of peer-to-peer communication on the LAN and brokered
communication across the WAN
Such networks can take advantage of peer-to-peer performance and resilience on the LAN while
mediating data and enforcing security policies at subsystem or network boundaries with data
routers. Persistence and other services can be deployed and relocated as appropriate.
4.2 Disadvantaged Networks
It is never acceptable for applications to act upon obsolete information. When networks are
disconnected, intermittent, or limited in their bandwidth (DIL), this challenge is even more
significant. Messaging technologies have the opportunity to either mitigate or exacerbate it. The
following are a few of the factors to consider:
• Data compactness: The more limited the network’s bandwidth, the more important it is
that the messaging layer does not bloat its payloads with an inefficient data
representation. Larger payloads also take longer to send, increasing the chance that a
network drop will hit in the middle, preventing successful transmission. System designers
sometimes rely on XML to provide data transparency; unfortunately, XML can be bulky.
DDS does support XML payloads but also provides similar benefits using a very compact
binary data representation.
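To illustrate the difference, the following sketch contrasts an XML rendering of a small data update with a fixed binary layout of the same fields. (Python is used purely for illustration; the field names and values are hypothetical, and DDS itself uses its standardized CDR binary encoding, not Python's struct module.)

```python
import struct
import xml.etree.ElementTree as ET

# A hypothetical track update: id, latitude, longitude, speed.
track = {"id": 42, "lat": 36.8508, "lon": -76.2859, "speed": 14.5}

# XML rendering: self-describing but bulky.
elem = ET.Element("TrackUpdate")
for key, value in track.items():
    ET.SubElement(elem, key).text = str(value)
xml_bytes = ET.tostring(elem)

# Compact binary rendering of the same fields, using a fixed layout
# agreed in advance (as a data-centric type definition allows):
# one 32-bit int and three 64-bit doubles, 28 bytes in all.
bin_bytes = struct.pack("<iddd", track["id"], track["lat"],
                        track["lon"], track["speed"])

print(len(xml_bytes), len(bin_bytes))  # the binary form is several times smaller
```

On a constrained link, that size ratio applies to every single update sent, so the encoding choice compounds quickly.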
• Protocol responsiveness: The protocol must recover from losses and disconnections
quickly, and while the network is connected, it must use it efficiently. TCP—and
protocols layered on top of it—suffers in this area. While it can provide excellent
performance when connectivity is good, it can be slow to respond to changing network
conditions. And its head-of-line blocking behavior and global timeouts can cause
multiple message streams to halt delivery for an extended period if any one of them
suffers a transitory loss of synchronization.
The DDS-RTPS protocol can be layered on top of TCP. However, most typically it runs
on top of UDP, where it provides reliability and independent quality-of-service control on
a per-stream basis, avoiding cross-stream interference and extended blocking.
• Bounded resource use: A typical durable messaging technology operating over an
intermittent link must store every message that was sent while the link was down and
replay those messages when the link is restored. If the link goes down for an extended
period, the resources needed to store these messages can grow in an unbounded fashion.
And upon reconnection, replaying those messages will take an increasing amount of time.
At some point, the relationship between the data rate, the available bandwidth, and the
likelihood of network disconnection will reach a tipping point, at which it will be
impossible to replay the messages cached from the previous disconnection before the
next disconnection occurs. At this point, the network, while connected, will be
continually full, but receiving applications will be permanently behind.
A data-centric message design eliminates dependencies between messages, allowing
durable implementations to safely cache only a bounded number of messages rather than
all that were ever sent, reducing both local resource use and network bandwidth
requirements. DDS can express such a design directly using standard QoS policies, and
the DDS-RTPS/UDP protocol stack can support these policies all the way down to the
network level.
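The bounded-cache idea can be sketched in a few lines. (The class and method names below are illustrative only; the real DDS mechanism is the standard HISTORY QoS policy with a KEEP_LAST depth, applied per data-object instance.)

```python
from collections import deque

class KeepLastCache:
    """Sketch of a data-centric durable cache: for each key (data-object
    instance), retain only the most recent `depth` updates, in the spirit
    of DDS's HISTORY=KEEP_LAST QoS. Illustrative names, not the DDS API."""
    def __init__(self, depth):
        self.depth = depth
        self.samples = {}  # key -> deque of recent updates

    def write(self, key, value):
        # deque(maxlen=...) silently discards the oldest entry when full,
        # so storage stays bounded no matter how long an outage lasts.
        q = self.samples.setdefault(key, deque(maxlen=self.depth))
        q.append(value)

    def replay(self):
        """What a late-joining or reconnecting reader would receive."""
        return {k: list(q) for k, q in self.samples.items()}

cache = KeepLastCache(depth=2)
for i in range(10_000):        # an extended outage's worth of updates
    cache.write("track-1", i)
print(cache.replay())          # only the last 2 survive: {'track-1': [9998, 9999]}
```

Because no message depends on an earlier one, replaying just the tail of the stream leaves the reader fully up to date.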
• Graceful degradation: In some cases, if it’s not possible to deliver every message, it’s
best to deliver none of them. In other cases, graceful degradation is more desirable:
deliver as much data as possible within the reliability and timeliness requirements of the
applications involved, but allow other messages to be dropped in the interest of allowing
those applications to continue processing the most up-to-date information. Paradoxically,
a middleware that expects to be able to deliver everything over a network that can’t fulfill
that expectation will often end up delivering very little—and at great expense, as it
continually floods the network with negative acknowledgements and resends of messages
that were previously dropped by the network.
A data-centric message design enables graceful degradation by eliminating dependencies
among messages in the same stream. DDS provides standard QoS policies and a flexible
protocol that allow this design to be realized in a portable and interoperable way. These
policies allow administrators to specify the strict reliability guarantees some message
streams require. But they also allow more relaxed contracts when and where appropriate,
including dropping unacknowledged messages that have been superseded by a certain
number of subsequent messages, down-sampling rapid-fire streams, and so on.
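One form of such a relaxed contract, down-sampling a rapid-fire stream, can be sketched as follows. (The names are illustrative; the analogous standard DDS mechanism is the TIME_BASED_FILTER QoS policy, which requests at most one sample per instance per minimum separation.)

```python
class TimeBasedFilter:
    """Sketch of down-sampling in the spirit of DDS's TIME_BASED_FILTER
    QoS: deliver at most one sample per key every `min_separation`
    seconds; intervening samples are silently superseded. Illustrative
    names, not the DDS API."""
    def __init__(self, min_separation):
        self.min_separation = min_separation
        self.last_delivery = {}  # key -> timestamp of last delivered sample

    def accept(self, key, timestamp):
        last = self.last_delivery.get(key)
        if last is None or timestamp - last >= self.min_separation:
            self.last_delivery[key] = timestamp
            return True   # deliver this sample
        return False      # superseded; drop without blocking the stream

filt = TimeBasedFilter(min_separation=1.0)
# A sensor publishing every 0.1 s; the subscriber asked for roughly 1 Hz.
delivered = [t / 10 for t in range(50) if filt.accept("sensor-1", t / 10)]
print(delivered)  # about one sample per second: [0.0, 1.0, 2.0, 3.0, 4.0]
```

The dropped samples cost neither bandwidth nor retransmission effort, yet the subscriber always holds the most recent value.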
4.3 Scalability
DDS networks such as that shown in Figure 8 above enable scalability in each portion of the
overall system.
Peers are lightweight. Each application participating in the local network needs to keep only an
in-memory cache of the recent updates to the data it is publishing or has subscribed to. It does
not need a traditional database or persistent storage. Furthermore, a single UDP socket can
communicate with an arbitrary number of remote peers, so IP resources are kept to a minimum.
These efficiencies allow DDS implementations to run in embedded systems in addition to
enterprise-class workstations and servers.
Peer-to-peer networks are reliable and efficient. Peer-to-peer communication avoids artificial
bottlenecks and single points of failure. The DDS on-the-network format in particular is designed
to be highly compact, and multicast communication is supported (though not required).
Thousands of applications can communicate in a single network, exchanging hundreds of
thousands of data updates per second per producer-consumer pair, or many millions in aggregate.
(These same properties make DDS well suited for disadvantaged, limited-bandwidth, and/or
intermittent links.)
Wide-area networks require flexibility. Unlike TCP-based protocols, DDS-RTPS offers
per-channel quality-of-service control and avoids head-of-line blocking. These characteristics
improve performance and make the infrastructure's behavior more predictable, even over
challenging links. When connecting local and wide-area nodes, a broker can forward data and
shape traffic appropriately for each side.
4.4 Security
Secure messaging requires a comprehensive approach.
• Implementations must run on secure platforms to prevent errant code from exploiting the
network.
• Applications must communicate over secure transports to prevent unauthorized parties
from snooping their data.
• Data must remain confidential even when passing through brokers, persistence services,
and other infrastructure components.
• Data must be properly attributed so that recipients can tell where it came from.
• Middleware must enforce system-level access-control policies to limit the production and
consumption of data to authorized parties.
OMG has published a complete security roadmap for DDS. This document describes where the
specification stands today and where it is going.
4.4.1 DDS Security—In Production
Standard interception APIs for policy enforcement. The DDS API notifies applications when
remote peers attempt to communicate with them—to join the same network, publish data to the
application, or subscribe to data from the application. These notifications carry with them
metadata about the remote application, publication, or subscription—including, if desired,
security credentials. Applications then have the opportunity to reject communication with
unauthorized peers.
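In code, this interception pattern looks roughly like the following sketch. (The class and callback names are hypothetical, not the standard DDS API, which exposes equivalent hooks through discovery listeners and ignore operations.)

```python
class AccessControlledParticipant:
    """Sketch of DDS-style interception: the middleware surfaces each
    discovered remote peer, along with its metadata and credentials, to
    the application, which may reject it. Illustrative names only."""
    def __init__(self, authorized_ids):
        self.authorized = set(authorized_ids)
        self.peers = []  # peers we have accepted communication with

    def on_peer_discovered(self, peer_id, credentials):
        # Application-supplied policy check: here, a simple allow-list
        # keyed on an identity field carried in the discovery metadata.
        if credentials.get("id") not in self.authorized:
            return False          # middleware will ignore this peer
        self.peers.append(peer_id)
        return True

p = AccessControlledParticipant(authorized_ids={"blue-force-1"})
print(p.on_peer_discovered("peerA", {"id": "blue-force-1"}))  # True
print(p.on_peer_discovered("peerB", {"id": "unknown"}))       # False
```

The key point is that the policy logic lives in the application (or a pluggable module), while the middleware guarantees the callback fires before any data flows.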
Figure 9—DDS access control
Vendor-supplied secure transport. OMG is currently working with vendors on the
specification of an interoperable secure transport for DDS (see below). In the meantime, secure
transport support based on the IETF-standard TLS9 protocol (over TCP, or DTLS over UDP) is
available from the vendor community.
Deep packet inspection. In DDS, data types are discoverable and data formats are standardized,
allowing data updates to be introspected at run time by the infrastructure. This can be done
with or without the use of XML—there is no need to give up the compactness or performance
of binary data.
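A sketch of the idea: given a type description learned at discovery time, infrastructure can decode named fields straight out of a compact binary payload. (The descriptor format and function names below are invented for illustration; DDS defines its own discovered-type and CDR-encoding machinery.)

```python
import struct

# Hypothetical type descriptor the infrastructure learned at discovery
# time: field name plus a fixed binary format for each field.
track_type = [("id", "<i"), ("lat", "<d"), ("lon", "<d"), ("speed", "<d")]

def introspect(payload, descriptor):
    """Decode named fields from a compact binary payload using only the
    discovered type description—no XML involved."""
    fields, offset = {}, 0
    for name, fmt in descriptor:
        (fields[name],) = struct.unpack_from(fmt, payload, offset)
        offset += struct.calcsize(fmt)
    return fields

payload = struct.pack("<iddd", 42, 36.8508, -76.2859, 14.5)
print(introspect(payload, track_type)["id"])  # a broker could filter on this field
```

This is what lets brokers and security services filter or cleanse traffic without forcing every payload into a bulky self-describing format.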
Secure operating-system support. DDS implementations run on secure enterprise platforms
such as SE Linux as well as secure partitioned operating systems such as VxWorks MILS.
4.4.2 DDS Security—In Progress
OMG is currently working with vendors on a comprehensive security specification that will take
DDS to the forefront of middleware and messaging technologies. This specification will address
the following scope:
• Interoperable secure transport, such as TLS, for security in transit
• Data-level tagging, signing, and encrypting for non-repudiation and confidentiality, even
as data updates traverse brokers or are persisted to disk
• Authentication and authorization at the domain (network) and topic level to enforce
system access-control policies
• A richer set of pluggable service-provider interfaces to allow users to integrate security-
enforcement mechanisms across multiple platforms and technologies in a system—
without locking themselves into a vendor-proprietary stack
OMG issued the RFP for this specification late last year and is currently processing initial
proposals. An adopted specification is expected late this year or early next year.
9 Transport Layer Security (TLS) is the current name of the specification previously known as Secure Sockets
Layer (SSL).
Figure 10—Overview of the in-progress DDS security specification (source: OMG DDS Security RFP)
5 Conclusion
Interoperability based on open standards fosters the growth of a competitive marketplace to
empower innovation while driving down costs. A monolithic “common” infrastructure by itself
can achieve neither of those ends. The customer community, the vendor community, and
independent standards bodies like OMG must work together.
This is what OMG, the Navy, its integrators, and its supporting vendors have done around DDS
technology. DDS applies long-proven architectural principles in new ways to enable rapid
development and insertion of new capability, lower-risk system integration and evolution, and
more reliable operations. At the same time, multiple vendors actively compete for Navy
business, lowering acquisition and life-cycle costs.
6 Appendix: Technology Comparison
The following tables compare specifications and vendor implementations of several
technologies.
6.1 Specification Comparison

Governing Body
  DDS:  OMG (independent standards body; open membership under published rules)
  WS-N: W3C (independent standards body; open membership under published rules)
  AMQP: AMQP Working Group (industry consortium; ad hoc membership)

Participation
  DDS:  12+ (DDS), 12+ (C4I, A&D specs atop DDS); quarterly in-person meetings
        (minutes available to OMG members)
  WS-N: Unknown
  AMQP: 12+; weekly conference calls (minutes available)

Vendors
  DDS:  About 6+
  WS-N: About 3
  AMQP: About 4

Primary Adoption
  DDS:  Defense (prod'n); Communication (prod'n); SCADA/Industrial (prod'n);
        Transportation (prod'n); Finance (prod'n)
  WS-N: Defense (prod'n)
  AMQP: Finance (prod'n); Defense (dev't); Transportation (unknown)

Integration Architecture
  DDS:  Data-Centric
  WS-N: Message-Centric
  AMQP: Message-Centric

Portable API
  DDS:  DDS 1.2 (Java, C++, C); JMS 1.1 (Java; vendor-specific)
  WS-N: WSDL-based
  AMQP: None; JMS 1.1 (Java; vendor-specific)

Data/Message Definition
  DDS:  Formal (W3C XSD or OMG IDL)
  WS-N: Formal (W3C XSD)
  AMQP: Informal or formal (AMQP-specific language)

Interoperable Protocol
  DDS:  Real-Time Publish-Subscribe 2.1; transport: UDP ucast & mcast, TCP
        (in progress + vendor-specific)
  WS-N: SOAP 1.1, 1.2; transport: HTTP/TCP
  AMQP: AMQP 1-0r0 pre-v1 release candidate; transport: TCP

Throughput (one-to-one)
  DDS:  10Ks–100Ks msgs/s10
  WS-N: 10s msgs/s
  AMQP: 100s–1Ks msgs/s11

Security
  DDS:  AuthN/AuthZ interception pts; improved AuthN/AuthZ (in progress);
        secure transport (in progress + vendor-specific); data tagging,
        signing, encryption (in progress)
  WS-N: Secure transport; data signing, encryption
  AMQP: Secure transport; AuthN/AuthZ (vendor-specific)
6.2 Vendor Comparison

API
  IBM (R3):               JMS 1.1 (Java)
  PrismTech (OpenSplice): DDS 1.2 (Java, C, C++, .NET); JMS 1.1 (Java)
  RTI:                    DDS 1.2 (Java, C, C++, .NET, Ada); JMS 1.1 (Java)
  Red Hat (MRG-M):        Vendor-specific (Java, Python, C++, .NET)
  NCES (JUM):             WSDL-based
10 Source: http://www.rti.com/products/dds/benchmarks-cpp-linux.html#MSGRATE. This data is presented for
one-to-one connections.
11 Source: “Reference Architecture Red Hat Enterprise MRG Messaging” whitepaper linked from
http://www.redhat.com/mrg/messaging/. This data is presented in aggregate across 60 applications. The test
methodology describes how to derive one-to-one measurements from it.
Protocol
  IBM (R3):               Real-Time Publish-Subscribe 2.1
  PrismTech (OpenSplice): Real-Time Publish-Subscribe 2.1; RT-Networking
                          (vendor-specific)
  RTI:                    Real-Time Publish-Subscribe 2.1
  Red Hat (MRG-M):        AMQP 0-10 pre-v1
  NCES (JUM):             WS-N

Enterprise Support Option
  IBM (R3): Yes; PrismTech (OpenSplice): Yes; RTI: Yes; Red Hat (MRG-M): Yes;
  NCES (JUM): No

License
  IBM (R3):               Comm'l
  PrismTech (OpenSplice): Open Source; Comm'l (free for eval)
  RTI:                    Comm'l (free for eval, IR&D); source avail for purchase
  Red Hat (MRG-M):        Comm'l; based on open-source Apache Qpid
  NCES (JUM):             Comm'l

Topology
  IBM (R3):               Brokered
  PrismTech (OpenSplice): Peer-to-peer; Brokered
  RTI:                    Peer-to-peer; Brokered
  Red Hat (MRG-M):        Brokered
  NCES (JUM):             Brokered

Redundancy
  IBM (R3):               Unknown
  PrismTech (OpenSplice): Hot producer fail-over per DDS Ownership spec
  RTI:                    Hot producer fail-over per DDS Ownership spec
  Red Hat (MRG-M):        Clustered brokers (vendor-specific)
  NCES (JUM):             Unknown

Persistence Location
  IBM (R3):               Broker
  PrismTech (OpenSplice): Per-node daemon
  RTI:                    Broker; standalone service
  Red Hat (MRG-M):        Broker
  NCES (JUM):             Broker

Data Caching
  IBM (R3):               None
  PrismTech (OpenSplice): Yes (in-memory; persistent)
  RTI:                    Yes (in-memory; persistent)
  Red Hat (MRG-M):        None
  NCES (JUM):             None