Mälardalen University
School of Innovation, Design and Engineering
November 2014

Master thesis

High level design tool integration: the Orchestrator

Jelena Matić
Darko Šupe

Supervisor: Gaetana Sapienza
Examiner: Ivica Crnković
Abstract

High level design tool integration: the Orchestrator

Today, developers of complex embedded systems use a number of development tools connected in a tool chain, which do not work together effectively. Our idea is to develop a new component called the Orchestrator and integrate it into the existing iFEST integration framework. The Orchestrator would control all the communication between the tools, that is, it would represent the interface between tools. This thesis deals with tool integration using Open Services for Lifecycle Collaboration (OSLC) as a core technology. The implementation of the Orchestrator is described to show how this mechanism works. The analysis of the implementation proved that the concept is worthy of further development and showed that OSLC technology can be used to ease the integration process, although it is still not widely used.
School of Innovation, Design and Engineering
Address: Högskoleplan 1, Västerås, Sweden
Telephone: +46 (0) 21-10 13 00, +46 (0) 16-15 36 00
Web address: www.mdh.se
Acknowledgements

Firstly, we would like to express our deepest gratitude to our examiner at MDH, Ivica Crnković, who gave us the opportunity to develop our thesis in cooperation with ABB. Further, we would like to express gratitude to our supervisor at Mälardalen Högskola, Gaetana Sapienza, for her great guidance throughout our master thesis. Besides our supervisor at MDH, we would also like to thank our supervisors at the ABB Corporate Research Center, Tiberiu Seceleanu and Morgan E. Johansson, for giving us useful feedback. Special thanks to Jad El-Khoury from KTH, who gave us a lot of valuable suggestions during the development of the Orchestrator. In addition, we would like to thank Maja Štula, a professor at the Faculty of Electrical Engineering, Mechanical Engineering and Naval Architecture of the University of Split, for giving her best suggestions and believing in us.

We would also like to thank our family for sending us lots of love. Special thanks to Maja Matić for her efforts in proofreading our thesis.
Acronyms

EA      Enterprise Architect
HTTP    Hypertext Transfer Protocol
IF      Integration Framework
iFEST   Industrial Framework for Embedded Systems Tools
OSLC    Open Services for Lifecycle Collaboration
RDF     Resource Description Framework
REST    Representational State Transfer
SDK     Software Development Kit
SOA     Service Oriented Architecture
SOAP    Simple Object Access Protocol
UI      User Interface
URI     Uniform Resource Identifier
WWW     World Wide Web
Contents

1 Introduction
  1.1 Motivation
  1.2 Objective
  1.3 Research questions
  1.4 Contribution
  1.5 Research Methodology
  1.6 Thesis structure
2 State of the Art
  2.1 Tool integration paradigms
  2.2 Tool integration definition
  2.3 Related work
    2.3.1 The Jazz Integration Approach
    2.3.2 ModelBus Approach
    2.3.3 Atego Workbench
3 Technical background
  3.1 Service oriented architecture
  3.2 Representational State Transfer
  3.3 Resource Description Framework
    3.3.1 SPARQL
    3.3.2 IRI
    3.3.3 Literals
    3.3.4 Blank nodes
4 Open Services for Lifecycle Collaboration
  4.1 Introduction to OSLC
  4.2 Core Concept
  4.3 Basic OSLC integration techniques
    4.3.1 Linking data via HTTP
    4.3.2 Linking Data via HTML User Interface
5 The iFEST Tool Integration Framework
  5.1 iFEST Architecture
  5.2 iFEST concept
6 Orchestrator
  6.1 Architecture
    6.1.1 OSLC Core specification
    6.1.2 Service catalogue
    6.1.3 Core services
    6.1.4 Repository
    6.1.5 Graphical User Interface
    6.1.6 Delegated User Interface
  6.2 Implementation
    6.2.1 Construction of OSLC-compliant Tool Adaptors
    6.2.2 Construction of the Orchestrator Adaptor
    6.2.3 Database
    6.2.4 User Scenarios
7 Development Environment
  7.1 Eclipse IDE
    7.1.1 Eclipse Lyo
    7.1.2 eGIT
    7.1.3 Maven
    7.1.4 Acceleo
    7.1.5 WindowBuilder
  7.2 Apache Derby
8 Future work
  8.1 Versioning
  8.2 Orchestrator Configuration Window
  8.3 The improvements to the code generator
9 Conclusion
Bibliography
Appendices
Appendix A. Installation Guide
Appendix B. List of figures
Appendix C. List of tables
1 Introduction

Nowadays, system development projects use various tools from different vendors in order to support the process. Their complexity increases with time, which means that the lifecycle process gets more complicated as well. By easing the communication between those tools, products can be developed more quickly and with less effort.

In the existing implementations, tools communicate directly with each other (Figure 1.1) using their adaptors, i.e. they communicate in pairs. This implies that a pair of tools has a certain common standard they follow in order to exchange data. In case we want to add a new tool to the environment, the existing tools must be adapted so they can communicate with the new one. With every change inside one tool, the other tools must adapt to the change as well. This approach is not adequate, since it takes an enormous amount of time and effort for that kind of environment to function properly. By standardising the communication between the adaptors, new tools can be introduced to the framework without great effort.

Figure 1.1: Point-to-point communication
1.1 Motivation

The motivation behind this thesis is to make the tool integration process quicker and less expensive. The existing iFEST [14] integration framework developed by ABB uses the Open Services for Lifecycle Collaboration (OSLC) technology [22] to standardise the communication between the tools in the framework. By using a common standard, tools can be integrated more easily. This solution, however, has one problem: tools communicate in pairs. Since their communication is not organised, it cannot be tracked. In order to solve this issue, a component called the Orchestrator should be added to the existing iFEST integration framework. The Orchestrator would control the cooperation of tools during the life-cycle process of an embedded system product. It would serve as a common interface to all other tools and manage all communication inside the framework.

Figure 1.2: Communication through the framework
1.2 Objective

The thesis is divided into two parts. In the first part, we provide a theoretical framework where we state that tool integration and the OSLC standard represent an accepted approach in the area of tool integration. The theoretical research is based on previous studies in this area, on published articles listed in the Bibliography, and on the specification provided by ABB CRC. The offered specification for the implementation of an OSLC-compliant tool, called the Orchestrator, is not fully complete, but it outlines the basic concept. In the second part of this master thesis, we present a practical implementation of the Orchestrator and the Tool Adaptors.
1.3 Research questions

The main area of research is divided into a set of three questions which need to be answered in order to get full insight into how various tools can communicate through the Orchestrator.

Research Question 1: Is it possible to develop the Orchestrator following the OSLC standard?

Research Question 2: How will communication through the Orchestrator differ from traditional tool2tool communication, and what benefits will communication via the Orchestrator bring?

Research Question 3: Will it be more valuable to develop a versioning system through the OSLC Configuration Management specification instead of traditional development through a central repository?
1.4 Contribution

In this thesis, we proposed and explained the practical implementation of the Orchestrator. The Orchestrator was defined as the central tool that would control the cooperation of tools during the life-cycle process of an embedded system product and manage all communication inside the framework. In regard to the research questions stated above, our contribution is:

1. We have proved that it is possible to develop the Orchestrator as an OSLC-compliant tool. As the starting point of our research, we developed a pilot project where our implementation deviated from the OSLC standard, but in the end that realisation led us to the right approach. Once the Orchestrator core resources were identified, it was easy to generate the adaptor skeleton by using the Lyo Code Generator [15].

2. Instead of traditional tool2tool communication, we have proposed communication through the Orchestrator. The development of the Orchestrator provides a common framework through which all communication between tools is done. This solution is more efficient compared to tool2tool communication and, in the future, it will provide the possibility of selecting hardware or software solutions and changing the decision later with minimal re-design effort.

3. Instead of a traditional central repository, we propose the development of a versioning system that follows the OSLC Configuration Management specification. In that way, only the Resource Description Framework (RDF) representations of artefacts need to be exchanged. A tool with corresponding adaptors will be able to fetch the representations and convert them back to the real artefacts, as described in Section 8.1.
1.5 Research Methodology

The main objective of the thesis was to investigate the possibility of tool communication through a common orchestration adapter. Our research includes the following steps:

1. analysis of the existing literature in the area of tool integration
2. finding a solution based on theoretical research and practical efforts
3. construction (design and implementation) of a prototype that provides the specified solution
4. feasibility demonstration of the implemented tool using different user scenarios

The process (steps 2-4) was performed in several iterations to achieve an acceptable level of feasibility. Our research was based on the existing Open Services for Lifecycle Collaboration (OSLC) [22] standard for tool integration. The components developed for the verification of our research were two tools, called Tool A and Tool B, and the Orchestrator component, all of which were made from scratch.
1.6 Thesis structure

This thesis report is structured as follows. In Section 2, the state of the art in tool integration is presented: in the first part, the concept of tool integration is defined based on the existing bibliography; in the second part, already available solutions in the area of tool integration are described. In Section 3, the service-oriented approach and all the techniques that have been used in this thesis are presented. In Section 4, a brief description of the OSLC standard is given, as the standard used to standardise the communication between the tools in the existing framework. In Section 5, based on the existing literature, the iFEST integration framework is defined as a general tool integration framework for hardware-software co-design of heterogeneous and multi-core embedded systems. In Section 6, the Orchestrator architecture is proposed and the detailed implementation of the OSLC Orchestrator adaptor is presented, as well as that of the OSLC-compliant Tool adaptors. In Section 7, the environment used throughout the development phase is described. In Section 8, possible future work is discussed and new ideas for the development of the versioning system are presented. In Section 9, conclusions are drawn about the possibility of having a central component managing the tools in the framework.
2 State of the Art

Every product has its lifecycle, which begins with the concept and, as presented in this thesis, ends with the finished product entering the market. In every case, the time spent on turning the concept into the final product plays a crucial role, observed from financial and many other aspects. The main goal of every project is to get the product pushed to the market in the shortest amount of time possible while keeping its quality at a high level. Today, there are many software tools which were developed to support the lifecycle process and shorten that time. For example, there are tools for managing the list of user requirements and tools for modelling. By integrating these tools, time to market shortens drastically.

What does it actually mean to integrate tools? We say that tools are integrated provided they can exchange data. Thus, requirements can, for instance, be sent to the modelling tool. The user of the modelling tool can use those requirements and make corresponding models. The requirement is bound to the resulting model and can be tracked throughout the whole lifecycle process. There are many more tools which can be integrated, e.g. tools for validating the user code by running test cases, tools for checking the syntax of the code, etc.

So far, tool integration has not been standardised, and tools are being integrated in different ways. Tool integration [27] can be designed on a common platform, by creating an integrated development environment. In that way, tools follow the platform's guidelines and use common components. Nevertheless, this approach does not provide flexible enough data for the integration of tools, and it is not a suitable solution for tools built on different platforms. As their complexity increases, integration of tools using traditional platforms becomes less maintainable. As an alternative solution, Open Services for Lifecycle Collaboration (OSLC) [27, 22] has been proposed. OSLC enables tools to cooperate uniformly by sharing their data. The specification introduces Linked Data, described in Section 4, as the primary technique for tool integration.

In the following Section, we provide explanations of two different tool integration paradigms and the tool integration definitions by Wasserman and by Thomas and Nejmeh. In Section 2.3, the Jazz Integration Approach, the ModelBus approach and Atego Workbench are described.
2.1 Tool integration paradigms

Every system development process is composed of a number of clearly defined phases which use different tools. Therefore, various tools can be integrated using either framework integration or point-to-point integration. There are different definitions of tool integration in the literature, but the most common definition is that integration represents the relationship between tools and other elements in the environment.

In point-to-point integration, tools communicate directly with each other. The number of interconnections between N tools is given by the equation N(N-1)/2. Hence, for the five tools displayed in Figure 2.1, ten interconnections need to be implemented.

Figure 2.1: Point-to-point tool integration
In practice, this often means that different adapters need to be implemented. Every tool must be able to convert its internal data into a format which can be read by some other tool. The main problem is the number of connections between tools, and the number of adaptors to be implemented, since no common standard for exchanging the data is used. On the other hand, framework integration decreases the number of interconnections and reduces the complexity of the implementation. Tools are connected to a common framework and communicate through it, as shown in Figure 2.2.

Figure 2.2: Framework tool integration
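To make the difference concrete, the two connection counts above can be sketched in a few lines (our illustration, not part of the thesis): point-to-point integration needs N(N-1)/2 pairwise links, while framework integration needs only one adaptor per tool.

```python
def point_to_point_links(n: int) -> int:
    """Direct tool-to-tool integration: every pair of tools needs its own link."""
    return n * (n - 1) // 2

def framework_links(n: int) -> int:
    """Framework integration: each tool needs one adaptor to the common framework."""
    return n

for n in (2, 5, 10):
    print(n, point_to_point_links(n), framework_links(n))
# For the five tools of Figure 2.1: 10 point-to-point interconnections,
# but only 5 adaptors when a common framework is used.
```

The quadratic growth of the first function against the linear growth of the second is exactly why the framework approach scales better as tools are added.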
2.2 Tool integration definition

One of the earliest articles on tool integration was written by Wasserman [1]. He identifies five types of integration: platform integration, presentation integration, data integration, control integration and process integration.

Platform integration deals with the common framework services used by tools; the basic issue for an integrated solution is that the different tools must be interoperable. Presentation integration is related to user interaction: new users of a tool are able to make certain assumptions about how to use the tool and how it will respond to given inputs. Data integration is concerned with the interchange of data between tools; every tool can insert, modify and access data produced by other tools through a shared repository. Control integration deals with the interoperability of tools in the sense that each tool can influence the behaviour of the other tools. Process integration is concerned with the role of tools within the entire software process; a process management tool is used to manage and monitor the other tools in the framework.
Wasserman concludes that the key issue within tool integration is the ability to establish the proper level of integration and to find a set of conforming tools for the chosen level. In addition, Wasserman defines a three-dimensional space of tool integration, as shown in Figure 2.3, where the axes of the graph illustrate presentation, data and control integration. It is evident that tools T1 and T2 in the graph cannot be integrated, because they do not agree in any of the three dimensions. Here, the key issue of tool integration becomes visible: minimal tool integration requires a set of tools to agree on at least one dimension, and effective tool integration requires a set of tools to agree on all three dimensions. Wasserman ends by suggesting that all tools need to be implemented using a layered approach and that integration needs to be managed through standards, where all tool manufacturers agree on a common mechanism.

Figure 2.3: Three dimensional space of tool integration [1]
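Wasserman's agreement criterion can be sketched as a simple check (our illustration, not from [1]): model each tool by the set of integration dimensions it supports and classify the overlap.

```python
# Hypothetical sketch of Wasserman's agreement criterion over the
# three dimensions of Figure 2.3.
DIMENSIONS = {"presentation", "data", "control"}

def integration_level(tool_a: set, tool_b: set) -> str:
    """Classify how well two tools can integrate by the dimensions they share."""
    shared = tool_a & tool_b & DIMENSIONS
    if not shared:
        return "none"       # like T1 and T2 in Figure 2.3: no shared dimension
    if shared == DIMENSIONS:
        return "effective"  # agreement on all three dimensions
    return "minimal"        # agreement on at least one dimension

t1 = {"presentation"}
t2 = {"data", "control"}
print(integration_level(t1, t2))  # prints "none"
```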
Thomas and Nejmeh [2], building on Wasserman's work, give a more precise answer to the question of what tool integration means by extending the relationships between tools. They describe tool integration as a property of a tool's relationships with other tools in the environment. Tool integration considers the extent to which tools agree, and understanding it will help us design better tools and integration mechanisms. In their approach, they discard platform integration and expand Wasserman's three-dimensional space with process integration, as shown in Figure 2.4:

• presentation integration – to improve the effectiveness of the client's cooperation with the environment, the authors identify properties of appearance and behaviour
• data integration – data consistency, interoperability, synchronization, non-redundancy and similar properties are established to allow users to easily share data between tools and to recognize the level of redundant data
• control integration – to grant the adjustable combination of environment functions, properties of appropriate "provision and use" are identified
• process integration – to ensure that tools collaborate adequately in support of a defined process, the authors identify the properties of a process step, process event and process constraint.
Figure 2.4: Types of tool relationship [2]
2.3 Related work

This Section describes similar tool integration approaches, such as the Jazz Foundation, ModelBus and Atego Workbench. Here, we state the pros and cons of each approach, and explain the basic structure of their frameworks as well as their similarities with the iFEST approach.
2.3.1 The Jazz Integration Approach

The Jazz Platform [16, 17] is one of the modern tool integration platforms built on OSLC. The Jazz platform is a team collaboration tool in itself, and it starts from the needs of team collaboration and project management in software development. It comprises an architecture and a set of application frameworks, toolkits and building blocks. The Jazz Integration Architecture empowers different tools to be used together in an integrated environment. Different tools can share data with similar functionalities using OSLC resources, separating the implementation of tools from the definition of the data.

The Jazz architecture is based on two basic concepts: Linked Data, which uses standard interfaces and methods to establish links between data, and Integration services, which provide capabilities for all lifecycle tools. The Linked Data concept is built on the Open Services for Lifecycle Collaboration (OSLC) approach, which consists of Providers and Consumers. An OSLC Provider represents a tool that exposes data to other tools as described in the OSLC specification, and an OSLC Consumer is a tool that accesses another tool's data through its interface. The central part of the Jazz Integration Architecture is the Jazz Team Server, providing core services which enable different tools to work together. All core and specific services are implemented as RESTful web services, where all communication between tools goes through a REST API using standard resource definitions.
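As a rough illustration of the Provider/Consumer idea (our sketch — the resource URI, titles and links are invented, not taken from the Jazz documentation), a Consumer typically performs an HTTP GET on a resource URI, receives an RDF representation from the Provider, and follows the links it finds there:

```python
import xml.etree.ElementTree as ET

RDF = "{http://www.w3.org/1999/02/22-rdf-syntax-ns#}"
OSLC_CM = "{http://open-services.net/ns/cm#}"  # OSLC Change Management namespace

# Hypothetical RDF/XML body an OSLC Provider might return for one resource.
response_body = """<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:oslc_cm="http://open-services.net/ns/cm#"
         xmlns:dcterms="http://purl.org/dc/terms/">
  <oslc_cm:ChangeRequest rdf:about="http://example.org/bugs/42">
    <dcterms:title>Fix adaptor timeout</dcterms:title>
    <oslc_cm:relatedChangeRequest rdf:resource="http://example.org/bugs/17"/>
  </oslc_cm:ChangeRequest>
</rdf:RDF>"""

# The Consumer parses the representation and extracts the linked resource.
root = ET.fromstring(response_body)
cr = root.find(OSLC_CM + "ChangeRequest")
uri = cr.get(RDF + "about")
link = cr.find(OSLC_CM + "relatedChangeRequest").get(RDF + "resource")
print(uri)   # the resource's own URI
print(link)  # a link the Consumer can follow with another GET
```

The point of the Linked Data style is visible here: the data itself carries the URIs of related resources, so tools integrate by following links rather than by sharing an internal format.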
The Jazz Platform is designed to support the integration of different tasks across the software lifecycle, to simplify team collaboration and coordination, to support globally distributed development teams, and to help teams build software more effectively.

Figure 2.5: Jazz Team Server [16]
Following the principles of the WWW, like RESTful resources and semantic data in RDF, the Jazz developers built robust integration between different tools and produced a set of tools which provide better traceability across the application lifecycle [17]. Today, Jazz represents an outstanding, scalable and extensible team collaboration platform with full integration support for IBM's Rational products. Furthermore, an open community has been built, making it possible to see more Jazz-based products appear. Therefore, as the community develops, Jazz may well become the industrial standard for team collaboration and coordination.
2.3.2 ModelBus Approach

ModelBus [7] is an integration solution based on Web services, providing services such as model storage and version management. ModelBus addresses everyday problems in the software development process, e.g. inconsistencies between artefacts, a small amount of automation, a deficient common terminology and the complexity of the systems. The key concept for tool interoperability is the existence of a virtual bus and the way it processes the data transmitted via the bus. The ModelBus communication platform associates the different services provided by the tools connected to ModelBus. Its architecture is built on SOAP web services, following the SOA approach. ModelBus offers a service registry and a notification service as core services and, by using orchestration tools, automatically executed workflows can be defined. ModelBus contains one central repository, called Model Storage, for sharing models between various tools and providing traceability between different models. Additionally, it contains a verification service, which is responsible for the verification of intermediate modelling results, and a transformation service, which can be used to transform the results produced by one tool so that they suit the purposes of another tool.

Figure 2.6: ModelBus Approach [7]
In the modelling context, ModelBus is one of the most important approaches built on transformations and models in order to create a tool chain. However, ModelBus does not provide any support to simplify the development of tool adaptors and gives only basic support for other integration efforts. Furthermore, all the provided services are deployed directly on application servers, which adds to the complexity of managing the services of the tool adaptors.
2.3.3 Atego Workbench

Atego Workbench [26] is another tool integration technology, built on a thin client architecture, which enables collaboration, information sharing and traceability between different tools. As opposed to the ModelBus approach, all the provided services run on the servers. Atego provides a rich User Interface where users can update different artefacts, configure role-based user access to a number of tools, etc. It provides the ability to manage the reusability of artefacts within and across projects while reducing development costs. Additionally, Atego eliminates the need for point-to-point integration by using a fully integrated, collaborative deployment framework. The framework contains one common multi-user repository for sharing various data, and it is built on Configuration Management. This integration solution is good for integrating applications already known to users, because it allows them to spend more time on engineering and less time on administration, but it still does not provide a general tool integration solution.
3 Technical background

3.1 Service oriented architecture

A service-oriented architecture, abbreviated SOA, is essentially a collection of autonomous, interoperable services. Services are units of functionality with unambiguously defined, implementation-independent interfaces. These services can either pass simple data to each other or coordinate some activity. Each service interaction is independent of other interactions and of the protocols of the communicating devices. Due to this platform independence, clients on any device using any operating system can use the service. SOA provides great reusability, in the sense that every client can use an arbitrary number of units of functionality. Every service that functions as a part of an SOA needs to fulfil the requirements shown in the following table.
Table 3.1: Service requirements

Statelessness: Separate services from their state data whenever it is possible.
High cohesion: Strengthen the bonds between the responsibilities of a single component.
Loose coupling: Decrease dependencies between components.
Location transparency: Every consumer can invoke a service regardless of its location inside the network.
Document-oriented: It is necessary to send data between services as documents.
Metadata-driven: Services are complemented with metadata by which they can be properly described and organised.
SOA
is
usually
built
using
different
web
services
standards,
such
as
SOAP,
REST,
etc.
OSLC
(Open
Services
for
Lifecycle
Collaboration),
described
in
Section
4,
follows the REST architectural style, which defines principles by which data can be transmitted over the HTTP protocol. By using Representational State Transfer, beneficial properties of services, such as performance and scalability, are improved.
Figure
3.1:
SOA
Conceptual
Model
SOA
relies
on
three
entities:
Service
Consumer,
Service
Provider
and
Service
Registry.
The
Service
Consumer
consumes
the
service,
i.e.
the
functionality
the
service
offers.
Different functions within the service are invoked for different functionalities.
The
consumer
can
directly
call
the
service
provided
he/she
knows
its
location.
If the service location is not known,
the
consumer
can
look
up
its
location
in
the
Service
Registry.
The Service Provider accepts and executes requests received from the consumer. It contains the service implementation and its description.
The Service Registry is a network directory which contains the addresses of all the available services offered by service providers.
It
stores
and
publishes
service
contracts
of
different
providers,
and
provides
them
to
interested
service
consumers.
A
Service
Contract
describes
the
way
for
the
service
consumer
to
interact
with
the
service
provider
in
order
to
use
the
service
provided.
It
contains
information
about
the
message
format,
quality
of
service
and
conditions
that
need
to
be
fulfilled
before
the
service
can
be
consumed.
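The interplay between the three entities can be sketched in code. This is an illustrative minimal model only, not part of the iFEST implementation; all names (ServiceRegistry, the "echo" service, etc.) are invented for this example.

```python
# Minimal sketch of the three SOA entities: consumer, provider, registry.
# All names here are illustrative, not from the thesis implementation.

class ServiceRegistry:
    """Network directory mapping service names to their providers."""
    def __init__(self):
        self._providers = {}

    def publish(self, name, provider):
        self._providers[name] = provider      # store the provider's contract/location

    def lookup(self, name):
        return self._providers[name]          # consumer resolves an unknown location

class ServiceProvider:
    """Holds the service implementation and its description."""
    def __init__(self, description, implementation):
        self.description = description
        self._implementation = implementation

    def execute(self, request):
        return self._implementation(request)  # accept and execute a consumer request

# Usage: a consumer that only knows the service name, not its location.
registry = ServiceRegistry()
registry.publish("echo", ServiceProvider("echoes its input", lambda r: r))

provider = registry.lookup("echo")            # look up the location in the registry
print(provider.execute("hello"))              # -> hello
```

Note how the consumer never touches the implementation directly; it only interacts through the registry and the provider's interface, which is the loose coupling the table above asks for.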
SOA has many advantages, useful in both business and IT areas: improved business decisions, improved employee productivity, easy integration of customer and supplier systems, more productive, flexible, manageable and secure applications, and lower development costs.
3.2
Representational
State
Transfer
Representational
State
Transfer
[25],
hereafter REST,
is
an
architectural
style
which
consists
of
a
coordinated
set
of
architectural
constraints
applied
to
components,
connectors
and
data
elements,
within
a
distributed
hypermedia
system.
Hypermedia
is
an
extension
of
the
term
hypertext.
It
is
a
nonlinear
medium
of
information
which
includes
graphics,
audio,
video,
plain
text
and
hyperlinks.
A nonlinear medium, such as a newspaper, is intended to be accessed in a nonlinear fashion. Hypermedia contrasts with the broader term multimedia, which is its superset.
REST was originally used to describe the desired architecture of the Web, but its other application, the development of web services, is the one important for our purposes.
REST is an alternative to other distributed-computing specifications such as SOAP.
A
web
service
is
considered
to
be
RESTful
if
it
conforms
to
the
REST
constraints:
1) Client-server:
There
must
be
a
uniform
interface
that
separates
clients
from
servers.
Clients
must
not
be
concerned
with
the
server
duties,
such
as
data
storage,
which
remains
internal
to
the
server.
In
that
way,
the
portability
of
client
code
is
highly
improved.
On
the
other
hand,
servers
are
not
concerned
with
the
user
interface
or
user
state.
Thus,
servers
are
simpler
and
more
scalable.
Clients
and
servers
can
also
be
developed
independently,
as
long
as
the
interface
between
them
is
not
altered.
2) Stateless:
Client
data
is
not
being
stored
on
the
server
between
requests.
Every
request
contains
all
the
necessary
data,
while
the
session
state
is
held
in
the
client.
The
session
state
can
be
transferred
through
the
server
to
another
service,
for
instance
a
database,
to
maintain
a
persistent
state
for
a
particular
period
of
time
and
allow
authentication.
3) Cacheable:
Clients
must
be
able
to
cache
responses.
Responses must define themselves as cacheable or not, preventing clients from reusing stale or inappropriate data in further requests.
Caching
improves
scalability
and
performance
by
eliminating
some
client-server
interactions.
4) Layered
system:
The
communication
between
clients
and
servers
may
go
through
several
intermediary
servers.
Intermediary
servers
improve
system
scalability
by
enabling
load-balancing
and
providing
shared
caches.
This is also beneficial for managing security issues.
5) Code
on
demand
(optional
constraint):
Servers can sometimes extend the functionality of a client by transferring executable code, such as Java applets or client-side scripts written in JavaScript.
6) Uniform
interface:
This
constraint
is
fundamental
to
any
REST
service.
The uniform interface makes it possible for clients and servers to be developed independently.
There
are
four
guiding
principles
to
this
interface:
a. Identification
of
resources:
Every
resource
is
uniquely
identified,
for
example
using
URIs.
The
resources
are
separated
from
their
representations.
b. Manipulation
of
resources
through
these
representations:
Representation
of
the
resource
is
sufficient
for
the
client
to
make
changes
to
the
resource
itself.
c. Self-descriptive
messages:
Each
message
contains
all
the
relevant
information
on
processing
the
message.
d. Hypermedia
as
the
engine
of
application
state:
the
client
makes
state
transitions
only
through
actions
previously
defined
by
the
server.
The
client
does
not
assume
that
any
particular
action
is
available
for
any
particular
resource
beyond
those
described
in
representations
previously
received
from
the
server.
If
a
service
violates
any
of
the
constraints
described
above,
except
the
optional
one,
it
cannot
be
considered
RESTful.
Therefore,
web
service
APIs
that
adhere
to
the
constraints
are
RESTful.
The main aspects of this type of web service are:
• Base
URI,
for
example
http://website.com/artifacts/
• An
Internet
media
type
for
data.
The
Internet
media
type
is
an
identifier
used
on
the
Internet
to
indicate
the
type
of
data
that
a
file
contains.
The
media
type
used
in
this
work
is
the
Resource
Description
Framework
(RDF)
which
is
described
in
the
following
chapter.
• Standard
HTTP
methods
–
GET,
PUT,
POST
and
DELETE
• Hypertext
links
to
reference
state
• Hypertext
links
to
related
resources
A REST service uses resources uniquely identified by their URIs; in other words, it works with their representations and manages them using standard HTTP methods.
The
table
below
shows
what
specific
HTTP
methods
do
when
executed
on
the
URI
of
the
collection
of
resources,
for
example
http://website.com/artifacts/.
Table 3.2: Collection URI's HTTP methods [25]

GET: List the URIs, and possibly other details, of the collection's members.
PUT: Replace the entire collection with another one.
POST: Create a new entry in the collection. The new entry's URI is assigned automatically and is usually returned by the operation.
DELETE: Delete the entire collection.
If,
on
the
other
hand,
we
execute
the
methods
on
the
URI identifying a unique resource,
like
http://website.com/artifacts/artifact1,
their
behaviour
is
altered,
as
described
in
Table
3.3.
Table 3.3: Artifact URI's HTTP methods [25]

GET: Retrieve a representation of the addressed member of the collection, expressed in an appropriate Internet media type.
PUT: Replace or create the addressed member of the collection.
POST: Usually not used. The addressed member is treated as a collection in its own right and a new entry is created in it.
DELETE: Delete the addressed member of the collection.
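As a minimal illustration of the two tables, the following sketch models a collection and its members in memory. The Collection class and the "artifactN" naming are invented for this example, mirroring the http://website.com/artifacts/ layout from the text; a real service would expose these operations over HTTP.

```python
# In-memory model of the HTTP method semantics in Tables 3.2 and 3.3.
# The class and naming scheme are illustrative, not from the thesis.

import itertools

class Collection:
    def __init__(self):
        self._members = {}
        self._ids = itertools.count(1)

    # --- methods on the collection URI (Table 3.2) ---
    def get(self):
        return sorted(self._members)              # list the member URIs

    def post(self, representation):
        uri = f"artifact{next(self._ids)}"        # URI assigned automatically...
        self._members[uri] = representation
        return uri                                # ...and returned by the operation

    def delete(self):
        self._members.clear()                     # delete the entire collection

    # --- methods on a member URI (Table 3.3) ---
    def get_member(self, uri):
        return self._members[uri]                 # retrieve a representation

    def put_member(self, uri, representation):
        self._members[uri] = representation       # replace or create the member

    def delete_member(self, uri):
        del self._members[uri]                    # delete the addressed member

artifacts = Collection()
uri = artifacts.post({"name": "requirement spec"})
print(uri)                # -> artifact1
artifacts.put_member(uri, {"name": "updated spec"})
print(artifacts.get())    # -> ['artifact1']
artifacts.delete_member(uri)
print(artifacts.get())    # -> []
```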
3.3
Resource
Description
Framework
The
Resource
Description
Framework
(RDF)
is
a
framework
developed
by
the
World
Wide
Web
Consortium
(W3C)
which
is
used
to
describe
web
resources.
The
description
of
resources
is
called
a
resource
representation.
By
using
RDF,
the data can be exchanged independently of its original format, through its RDF representation.
RDF
was
made
to
be
read
primarily
by
machines.
RDF
uses
a
construct
called
a
triple
to
describe
data.
The
triple
consists
of
a
subject,
a
predicate
and
an
object.
The
predicate
can
also
be
called
a
property.
A set of triples is called an RDF graph.
Figure
3.2:
RDF
triple
A graph consists of nodes and arcs: subjects and objects are represented as nodes, and predicates as arcs.
This is best illustrated by the following example:
<New York> <is> <big>.
<New York> <has> <great taxi service>.
<Ivan> <lives in> <New York>.
<Ivan> <is> <a taxi driver>.
<Ivan> <was born in> <Croatia>.
One
subject
can
be
referenced
multiple
times.
By combining these triples into a set, we get the graph shown in Figure 3.3.
Figure
3.3:
Triples
graph
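As an illustration of how such a graph can be held in memory, the five triples above can be stored as plain (subject, predicate, object) tuples. This is a simplified sketch, not real RDF tooling; plain strings stand in for RDF terms.

```python
# The five example triples as plain (subject, predicate, object) tuples.
triples = {
    ("New York", "is", "big"),
    ("New York", "has", "great taxi service"),
    ("Ivan", "lives in", "New York"),
    ("Ivan", "is", "a taxi driver"),
    ("Ivan", "was born in", "Croatia"),
}

# Subjects and objects become nodes; predicates become labelled arcs.
nodes = {s for s, _, _ in triples} | {o for _, _, o in triples}
arcs = {(s, o): p for s, p, o in triples}

print(len(nodes))                   # -> 6  ("New York" appears as subject and object)
print(arcs[("Ivan", "New York")])   # -> lives in
```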
3.3.1
SPARQL
SPARQL
is
a
query
language
which
can
be
used
on
graphs
like
the
one
shown
in
Figure
3.3.
It
is
used
to
query,
for
example,
where
Ivan
lives.
SPARQL
is
an
acronym
for
SPARQL
Protocol
and
RDF
Query
Language.
It is an RDF query language capable of retrieving and manipulating data stored in RDF format.
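An actual SPARQL query answering "where does Ivan live" would be written roughly as SELECT ?place WHERE { :Ivan :livesIn ?place . }. The following sketch imitates only the underlying triple-pattern matching idea in plain Python, with None standing in for a query variable; it is illustrative, not a SPARQL engine.

```python
# Naive sketch of SPARQL-style triple-pattern matching over the example graph.
# None in a pattern plays the role of a SPARQL variable such as ?place.

triples = [
    ("New York", "is", "big"),
    ("Ivan", "lives in", "New York"),
    ("Ivan", "is", "a taxi driver"),
    ("Ivan", "was born in", "Croatia"),
]

def match(pattern, data):
    """Return the triples whose fixed positions equal the pattern's."""
    return [t for t in data
            if all(p is None or p == v for p, v in zip(pattern, t))]

# "Where does Ivan live?" -> leave the object position unbound.
print(match(("Ivan", "lives in", None), triples))
# -> [('Ivan', 'lives in', 'New York')]
```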
3.3.2
IRI
IRI
is
an
abbreviation
for
Internationalized Resource Identifier.
It
is
a
generalization
of
the
uniform
resource
identifier
(URI).
Unlike
URIs,
IRIs
are
not
limited
to
a
subset
of
the
ASCII
character
set
but
can
also
contain
characters
from
the
Universal
Character
Set
(Unicode/
ISO
10646).
3.3.3
Literals
Literals
are
basic
values
that
are
not
IRIs.
They
can
only
appear
in
the
object
position
of
a
triple.
In the example above, the literals are “big”, “Croatia”, etc.
Literals
must
be
associated
with
their
data
types
which
are
defined
inside
the
RDF
specification.
3.3.4
Blank
nodes
A
blank
node
represents
a
resource
that
may
not
be
identified
by
a
global
identifier.
Blank nodes represent resources without stating their value, which makes them similar to variables.
They
can
be
found
in
the
subject
and
object
position of a triple.
Figure
3.4:
Blank
node
4 Open
Services
for
Lifecycle
Collaboration
The
iFEST
integration
framework
consists
of
lifecycle
tools
whose
goal
is
to
make
the
product
development
process
quicker
and
less
expensive.
Before describing the framework in detail, the OSLC concept must be explained, as it is the core technology used.
4.1
Introduction
to
OSLC
Open
Services
for
Lifecycle
Collaboration
(OSLC)
is
an
open
community
[22],
founded
in
2008,
whose
goal
is the creation of specifications for integrating lifecycle tools.
These
specifications
allow
different
lifecycle
tools
to
share
their
data.
Examples
of
lifecycle
tools
in
software
development
include
defect
tracking
tools,
requirement
management
tools
and
test
management
tools.
The
OSLC
community
is
organised
in
workgroups
where
each
workgroup
represents
an
OSLC
domain.
An
OSLC
domain
describes
a
part
of
the
lifecycle,
like
configuration
management
(CM
specification),
requirement
management
(RM
specification)
or
quality
management
(QM
specification).
Each
domain
specifies
a
common
set
of
resources,
formats
and
services
that
can
be
used
by
tools
in
this
domain.
All OSLC domains are built on the OSLC Core specification, where core concepts and integration patterns are described.
The
main
goal
of
each
workgroup
is
to
address
integration
scenarios,
define
the
common
vocabulary
for
individual
topics
and
to
create
open
and
public specifications of resources.
The OSLC is a rapidly growing community comprising major companies,
such
as
ABB,
IBM,
Ericsson,
General
Motors,
Siemens,
etc.,
which
work
on
designing
and
correcting
standards
and
specifications.
The created standards can be used to achieve tool interoperability and traceability.
The ultimate goal of OSLC is to improve data usage and the integration process.
The OSLC is not a complete solution
to
all
integration
needs,
and
adopting
OSLC
in
the
framework
does
not
mean
that
one
cannot
use
other
technologies
that
are
not
supported
by
OSLC.
iFEST
intends
to
propose
extensions
to
OSLC
to
meet
the
needs
in
the
area
of
embedded
systems.
The
areas
of
such
extensions
include
specifications
for
embedded
system
domains
and
needs
for
handling
established
exchange
formats.
OSLC
does
not
cover
aspects
such
as
transformations,
transactions
or
handling
data
integration
at
different
levels
of
granularity
which is the main shortcoming of OSLC; because of this, deviation from certain OSLC specifications may be necessary.
Figure
4.1:
OSLC
domains
[22]
4.2
Core
Concept
Key
concepts
[10]
for
tool
integration
provided
by
OSLC
are:
a
uniform
access
to
shared
resources,
a
common
vocabulary
and
formats
and
a
loose
coupling
between
tools,
established
through
REST
architecture.
In
OSLC,
all
the
information
is
exchanged
using
Web
services.
OSLC
Web
services
are
programs
which
communicate
over
a
network
in
a
RESTful
way,
using
HTTP
as
a
protocol.
Data
is
usually
represented
in
RDF/XML
format.
OSLC
is
based
on
W3C
Linked
Data
in
which
separate data sets are linked together.
The Linked Data concept was created for publishing structured data.
It
rests
on
standard
Web
technologies
such
as
HTTP,
RDF
and
URIs,
but
instead
of
using
them
to
provide
web
pages
for
human
readers,
it
extends
them
to
share
information
in
a
computer
readable
way.
There
are
several
rules
provided by the Linked Data concept:
1. Use
URIs
as
names
for
things
2. Use
HTTP
URIs
so
that
people
can
look
up
those
names
3. When
someone
looks
up
a
URI,
provide
useful
information
using
the
standards
(RDF,
SPARQL)
4. Include
links
to
other
URIs,
so
that
they
can
discover
more
things
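The four rules can be illustrated with a toy data set. The URIs, the property names and the link-following helper below are invented for this sketch; real Linked Data would be served over HTTP in RDF.

```python
# Illustrative sketch of the four Linked Data rules: things are named with
# HTTP URIs (rules 1 and 2), a lookup returns useful machine-readable data
# (rule 3), and that data links to further URIs (rule 4). URIs are made up.

dataset = {
    "http://example.com/tools/toolA": {
        "dcterms:title": "Tool A",
        "links": ["http://example.com/models/model1"],   # rule 4: link out
    },
    "http://example.com/models/model1": {
        "dcterms:title": "Model 1",
        "links": [],
    },
}

def dereference(uri):
    """Rule 3: looking up a URI yields useful, machine-readable data."""
    return dataset[uri]

def discover(start):
    """Follow links breadth-first, discovering more things (rule 4)."""
    seen, frontier = [], [start]
    while frontier:
        uri = frontier.pop(0)
        if uri not in seen:
            seen.append(uri)
            frontier.extend(dereference(uri)["links"])
    return seen

print(discover("http://example.com/tools/toolA"))
# -> ['http://example.com/tools/toolA', 'http://example.com/models/model1']
```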
In
OSLC,
data
is
called
an
OSLC
Resource
which
represents
for
instance
a
model
or
a
test
case.
Each
Resource
is
identified
by
a
unique
URI,
and
each
Resource
must
be
accessed
by
HTTP
commands:
GET,
PUT,
POST
using the Linked Data principles described above.
Also,
a
Resource
must
be
available
in
the
RDF
standard.
The
resource
storage
medium
is
unconstrained
by
OSLC,
it
could
be
stored
in
a
relational database,
on
disk,
a
source
code
system
or
in
any
other
way.
When
describing
an
OSLC
Resource
type,
OSLC
Specifications
must
provide
information
listed
in
Table
4.1.
Table 4.1: Basic OSLC properties [11]

Name (String): The name of the resource, which must be valid as the Local Name part.
URI (URI): The URI of the resource definition, formed by appending the Name to the end of the Namespace URI in the specification that defines the resource.
Likewise,
OSLC
Specifications
may
provide
a
list
of
allowed
or
required
properties
for
a
particular
domain.
For each defined property, the specification should provide the information listed in Table 4.2. In practice, any defined resource may have many more user-defined properties, depending on the needs.
Table 4.2: Mandatory OSLC properties [11]

Name (String): The name of the property, which must be valid.
URI (URI): The URI that identifies the property.
Description (String): Description of the property.
Value-types: Literal (the value can be Boolean, Date Time, Decimal, Double, Float, Integer, String, XMLLiteral); Resource (the value is a resource at a specified URI); Local Resource (the value is a resource available only inside the resource being defined); AnyResource (the value is either a Resource or a Local Resource).
Representation (String): For properties with a resource value-type, OSLC specifications should specify how the resource will be represented.
Read-only: True (providers should not permit clients to change the value of the property); False (providers may permit clients to change the value of the property); Unspecified (the domain specification leaves the choice up to provider implementations).
The
core
OSLC
concept
[11]
is
shown
in the figure below.
Figure
4.2:
OSLC
core
concept
[11]
The Service Provider Catalogue is the main entity in the concept. It lists multiple Service Providers, each representing a product that implements one or more OSLC Services, which may independently implement different Domain specifications. A Service Provider thus describes the Services it offers in the form of a service catalogue.
The service catalogue functions as a phonebook for services, where an OSLC Service is a set of capabilities that enable a web client to create, retrieve, update and delete resources managed by an ALM product.
The
catalogue
contains
all
service
contracts,
allowing
the
lookup
of
a
certain
service
provider
based
on
service
descriptions
it
provides.
Each
Service
can
provide
Creation
Factories
for
Resource
creation,
Query
Capabilities
for
resource
querying
and
Delegated
UI
Dialogs
in
order
for
clients
to
create
and
select
resources
via
(graphical)
UI.
Query
Capabilities
and
Creation
Factories
may
offer
Resource
Shapes
that
describe
resource
properties
managed
by
the
Service.
Resource
Shapes
are
essentially
the
meta-models
for
Resources
and
they
are
not
mandatory.
4.3
Basic
OSLC
integration
techniques
OSLC
specification
presents
two
basic
techniques
for
integrating
tools
[10].
Both
of
these
techniques
are
built
on
the HTTP and RDF standards.
The
first
technique
is
only
applicable
when
HTML
UI
is
not
present
and
the
other
technique
permits
the
integration
to
exploit
existing
UI.
28. 4
Open
Services
for
Lifecycle
Collaboration
22
4.3.1
Linking
data
via
HTTP
OSLC
defines
a
general
tool
protocol
for
creating,
retrieving,
updating
and
deleting
lifecycle
data
based
on
internet
standards,
such
as
HTTP
and
RDF,
using
the
Linked
data
concept.
To establish a connection between the data of one tool and that of another, the HTTP URI of one resource is embedded in the representation of another.
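A minimal sketch of this technique follows. The tool URIs and resource data are invented for illustration; the property name oslc_qm:validatesRequirement follows the OSLC QM vocabulary, but the dictionaries are only stand-ins for real RDF representations.

```python
# Sketch of "linking data via HTTP": the HTTP URI of one tool's resource is
# embedded in the representation of a resource in another tool. All URIs
# and data below are illustrative.

# A requirement managed by a requirements-management tool:
requirement = {
    "uri": "http://rm.example.com/requirements/42",
    "dcterms:title": "The pump shall stop within 2 s",
}

# A test case in a test-management tool links to it simply by embedding
# the requirement's URI in its own representation:
test_case = {
    "uri": "http://qm.example.com/testcases/7",
    "dcterms:title": "Pump stop timing test",
    "oslc_qm:validatesRequirement": requirement["uri"],
}

print(test_case["oslc_qm:validatesRequirement"])
# -> http://rm.example.com/requirements/42
```

The two tools never exchange internal data structures; following the embedded URI with an HTTP GET is enough to retrieve the linked resource.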
4.3.2
Linking
Data
via
HTML
User
Interface
In the “Linking Data via HTML User Interface“ technique, the URL of the client’s UI is embedded in the representation of another client, instead of one tool integrating with the other by linking resources to each other.
OSLC
allows
the
user
to
link
the
resource
or
see
information
about
resource
in
another
tool.
This
enables
the
tool
to
exploit
existing
UI
and
business
logic
in
other
tools.
In some cases, this principle is more effective and offers more user functionality than implementing a new UI and integrating via CRUD methods.
5 The
iFEST
Tool
Integration
Framework
The
iFEST
integration
framework
is
a
general
tool
integration
framework
for
hardware-software co-design of heterogeneous and multi-core embedded systems.
It
allows
tools
to
be
readily
placed
within
the
tool
chain.
Furthermore,
it
enables
tools
to
exchange
data
automatically,
eliminating
the
need
for
engineers
to
make
the
transition
from
the
tool
in
one
lifecycle
phase
to
the
tool
in
another.
To
validate
the
integration
framework,
industrial case studies based on tools following the V-model are used.
The V-model [23] is
a
software
development
process
which
can
be
represented
as
an
extension
of
the
waterfall
model.
It
has
two
separate,
converging
“streams”
of
lifecycle
activities
or
phases,
together resembling the letter “V”.
On
the
left
side,
activities
related
to
development
are
shown
and,
on
the
right
side,
activities
concerning
verification
and
validation
of
the
activities
from
the
left
side
are
shown.
As
we
move
from
the
upper
part
of
the
model
to
the
bottom,
we
move
from
activities
dealing
with
artefacts
on
a
higher
meta
level,
down
to
tools
dealing
with
more
specific
system
details.
Figure
5.1:
V-model
[23]
As the picture above shows, the activities on the left side of the V become more detailed as we descend, finally ending in the implementation activity, in which
the
system
is
fully
developed.
We
can
also
notice
dashed
arrows
which
show
us
that
every
activity
on
the
left
side
has
a
counterpart
on
the
right
side
that
validates
and
verifies
the
activity.
The
left
side
consists
of
two
major
parts:
Requirement
Engineering
&
Analysis
(RE&A)
and
Design
&
Implementation
(D&I).
In the RE&A phase,
requirements
are
defined
and
a
blueprint,
which
ought
to
function
as
the
basis
of
the
implementation,
is
provided.
D&I
is
a
set
of
phases:
Architecture,
Detailed
Design
and
Implementation.
In
this
phase,
we
define
and
develop
the
components
that
implement
the
requirements
from
the
previous
phase.
The
right-hand
side
consists
of
Verification
&
Validation
(V&V)
activities.
It
consists
of
Unit
Testing,
Integration
Testing
and
System
and
Acceptance
Testing
which
ensure
components
are
of
an
adequate
quality
and
behave
as
expected.
The
maintenance
activities
are
not
considered
within
iFEST.
The
centre
of
iFEST
IF
represents
the
traceability
between
the
artefacts
in
different
phases.
The
dashed
arrows
in
the
picture
above
symbolise
the
traceability
relationship
between
the
development
activities
and
tests.
The
goal
of
iFEST
IF
is
to
show
benefits
for
tools
in
all
three
categories
of
lifecycle
phases.
In
RE&A,
it
will
show
the
link
between
the
requirements
and
actual
models
and
simulations.
Requirements
can
easily
be
traced
through
all
upcoming
phases.
In the D&I phase,
iFEST
IF
should
show
the
connection
between
the
high
level
design
of
the
system
and
its
actual
implementation
components.
As
for
V&V,
verification
becomes
available
earlier
in
the
lifecycle
and
flaws can be detected more quickly, which in turn reduces development time and cost.
Every
tool
functioning
as
a
part
of
the
iFEST
framework
belongs
to
one
of
the
three
categories.
Every
category
defines
process
patterns,
transformations
and
metamodels
for
tools.
5.1
iFEST
Architecture
The
Framework
consists
of
tool
instances,
tool
adaptors,
IF
repository
and
the
Orchestrator
[14].
Every
instance
communicates
with
the
rest
of
the
framework
using
its
adaptor.
The adaptor converts all data to a common format so that it is understandable to all tool adaptors in the framework.
The
IF
Repository
is
used
to
store
all
the
relevant
data.
It
contains
different
versions
of
published
models
which
are
stored
in
corresponding
folders.
The
last
component
in
the
framework, the Orchestrator,
is
used
to
provide
general
services
to
all
tools
in
the
framework.
For
instance,
tool
adaptors
can
register
themselves
with
the
Orchestrator
in
order
to
receive
notifications
on
newly
published
IF
versions
of
models
from
tools
they
are
interested
in.
Figure
5.2:
iFEST
Tool
Integration
Framework
architecture
[14]
The
IF
repository
is
the
main
storage
of
the
framework.
Every
new
version
of
data
is
stored
separately
for
every
tool,
in
the
tool’s
unique
folder.
However,
most
of
the
new
versions
do
not
include
major
changes
so
there
is
no
need
to
publish
every
one
of
them.
A version is created in the repository only when the tool user publishes it.
For
that
reason,
every
tool
has
its
local
versioning
system
where
it
keeps
all
the
versions
made.
5.2
iFEST
concept
The
essential
concept
of
iFEST
framework
is
shown
in
Figure
5.3.
Three
fundamental
components
are
iFEST
Integration
Framework,
Tool
Chain
and
Integration
Platform
[14].
The
framework
consists
of
principles,
specifications,
guidelines
and a technological space representing the technology on which tool implementations will rely.
The iFEST framework is
based
on
OSLC
technology,
described
in
the
previous
chapter.
Various
tools
with
their
adaptors,
which
represent
tool
interfaces,
can
be
integrated
in
Tool
Chain
and
Integration
Platform.
The
framework
consists
of
the
following
main
building
blocks:
• Integrated
tool
–
Tool
with
its
adaptor.
It
can
be
integrated
into
a
Tool
Chain
and/or
Tool
Platform.
The
adaptor
is
implemented
to
be
compliant
with
iFEST
Integration
Framework.
• Tool
Adaptor
–
A
piece
of
software
which
is
actually
a
tool
interface
towards
other
tools
in
the
framework.
It
provides
and
also
consumes
both
services
and
data.
The
services
and
data
are
standardised
in
the
Adaptor
Specification.
Figure
5.3:
iFEST
fundamental
concept
[14]
The iFEST integration framework consists
of
the
following
components:
• The
Technological
Space
onto
which
tool
integration
implementations
rely,
e.g.
communication
protocols,
data
exchange
formats,
etc.
• Adaptor
Specifications
specify
the
data
to
be
manipulated
by
Tool
Chain
and
services
provided
and
requested
by
the
tools
of
Tool
Chain.
Adaptor
Specifications
rely
on
the
set
of
data
and
services
defined
in
OSLC.
• Guidelines
support
specifications
for
Tool
Adaptors
and
their
implementation
according
to
specifications.
The
guidelines
are
useful
for
developers
who
plan
to
provide
a
Tool
Adaptor
for
their
tool
or
use
existing
services
provided
by
iFEST
Tool
Adaptors
to
create
value
added
services.
• Technical
and
non-technical
Principles
define
rules
for
specifications
and
implementations.
• Tool
Chain
is
a
set
of
Integrated
Tools
which form a development environment in order to facilitate product development and maintenance.
• Integration
Platform
is
a
package
of
Integrated
Tools
including
their
adaptors
that
can
facilitate
the development of a Tool Chain.
It
also
provides
tooling
support
such
as
SDKs
and
adaptor
generators,
as
well
as
tools
to
help
deploy,
configure
and
maintain
Tool
Chains.
Tools
provided
by
the
Integration
Platform
are
usually
generic
and
can
be
of
use
across
many
Tool
Chains.
Considering
the
fact
that
the
Integration
Platform
consists
of
Tools
and
their
Tool
Adaptors,
the
question
is
what
the
difference
between
the
Integration
Platform
and
Tool
Chains
is.
The
difference
can
be
best
explained
considering
the
following
two
concepts.
An industrial organisation,
focused
on
product
development,
develops
Tool
Chains
and
their
specific
Tool
Adaptors,
and
uses
tools
as
well.
An
Integration
Platform
vendor,
focused
on
tool
and
tool
integration,
develops
a
combination
of
Integration
Platforms,
tool
adaptors
and
tooling
support
in
order
to
aid
the
development
of
Tool
Chains.
It
has
a
broader
knowledge
of
a
specific
domain
and
can
offer a reusable Integration Platform
to
many
organisations.
A
good
integration
framework
needs
to
support
both
the
developers
of
Tool
Chains
and
the
developers
of
Integration
Platforms.
6 Orchestrator
The
aim
of
the
iFEST
project
is
to
define
and
realise
a
tool
integration
framework
for hardware-software co-design of heterogeneous and multi-core embedded systems.
To
enable
the
communication
between
tools,
tool
adaptors
are
developed.
The
adaptor
represents
an
interface
between
the
tools
and
the
framework.
By
having
adaptors
which
follow
the
same
standard,
tools
can
exchange
data
regardless
of
the
format
data
was
originally
in.
The
iFEST
integration
framework
made
the
process
of
product
development
much
quicker
and
less
expensive.
By
developing
a
new
component
which
would
manage
the
communication
between
the
tools
in
the
framework,
the product's time to market would be even shorter.
This
Section
explains
that
component,
named
the
Orchestrator.
Its
architecture
with
core
services
is
described,
as
well
as
its
detailed
implementation
and
some
of
the
possible
scenarios
of
usage,
followed
by
sequence
diagrams.
6.1
Architecture
The
Orchestrator
is
a
tool
that
manages
other
tools
that
are
involved
in
the
system,
and
their
respective
services
offered
to
the
integration
network.
It
registers
the
tools
along
with
their
services
and
manages
tool2service
subscriptions.
The
Orchestrator
receives
messages
from
tools
and
forwards
them
to
the
respective
registered
tools.
It
also
stores
links
related
to
traceability
and
history,
with
information
on
tools,
files,
versions,
etc.
The
components
which
compose
the
Orchestrator
are
shown
in
Figure
6.1.
The
Orchestrator
uses
a
repository
to
store
all
the
relevant
data
that
can
be
related
to
the
tool
or
the
Orchestrator
itself.
The
tool
interoperability
across
the
IF
is
based
on
a
service-‐oriented
approach,
where
the
Orchestrator
and
tools
interact
via
services,
by
“providing”
and
“consuming”
them.
In
this
context,
services
are
considered
as
means
to
interact
and
exchange
data
between
the
Orchestrator
and
Tools
operating
in
the
integration
network.
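The responsibilities listed above can be summarised in a minimal sketch. This is an illustrative model only, with invented class and method names; it is not the actual Orchestrator implementation described later in this chapter.

```python
# Minimal sketch of the Orchestrator's role: registering tools with their
# services, managing tool-to-service subscriptions, and forwarding messages
# while keeping traceability history. All names are illustrative.

from collections import defaultdict

class Orchestrator:
    def __init__(self):
        self.catalogue = {}                      # service URI -> providing tool
        self.subscribers = defaultdict(list)     # service URI -> subscribed tools
        self.trace = []                          # traceability/history links

    def register(self, tool, service_uri):
        """A tool registers itself and the service it provides."""
        self.catalogue[service_uri] = tool

    def subscribe(self, tool, service_uri):
        """A tool subscribes to notifications about a service."""
        self.subscribers[service_uri].append(tool)

    def notify(self, service_uri, message):
        """Forward a message to every tool subscribed to the service."""
        self.trace.append((service_uri, message))          # keep history
        return [(tool, message) for tool in self.subscribers[service_uri]]

orch = Orchestrator()
orch.register("ToolA", "http://localhost:8080/ToolA/1/")
orch.subscribe("ToolB", "http://localhost:8080/ToolA/1/")
deliveries = orch.notify("http://localhost:8080/ToolA/1/", "model v2 published")
print(deliveries)    # -> [('ToolB', 'model v2 published')]
```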
Figure
6.1:
Orchestrator
architecture
6.1.1
OSLC
Core
specification
The
OSLC
Core
specification
is
developed
by
the
Core
workgroup
to
define
the
basic
integration
technologies
for
integrating
lifecycle
tools.
As
it
was
stated
before,
OSLC
specifies
resources
using
URIs,
which can be manipulated using HTTP methods.
The
Core
specification
in
combination
with
one
or
more
domain
specifications
defines the OSLC protocols supported by a domain tool.
OSLC
domain
specification
defines
additional
resource
types,
but
does
not
define
new
protocol.
In
this
thesis,
OSLC
Core
Specification
Version
2.0
and
Change
Management
Specification
Version
2.0
are
used.
6.1.2
Service
catalogue
The
Orchestrator
and
tools
interact
via
consuming
and
providing
services
from
each
other.
All
the
available
services
offered
by
the
tools
in
the
framework
are
listed
in
the
Service
catalogue
which
is
managed
by
the
Orchestrator.
Tools
can
browse
through
the
service
catalogue
and
subscribe to the services they want.
An example of a
catalogue
entry
is
given
in
Table
6.1.
Table 6.1: Service catalogue entries

Tool A:
  Service: http://localhost:8080/ToolA/1/
  Title: Tool A_service1
  Description: This is a service of tool A.

Tool B:
  Service: http://localhost:8585/ToolB/1/
  Title: Tool B_service1
  Description: This is a service of tool B.
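As a sketch, the catalogue entries of Table 6.1 can be represented and browsed as plain data. The browse helper is invented for illustration; the real catalogue is exposed through the Orchestrator's OSLC services.

```python
# The catalogue entries of Table 6.1 as a simple data structure; tools can
# browse this list and pick services to subscribe to. The browse() helper
# is illustrative, not part of the thesis implementation.

catalogue = [
    {"service": "http://localhost:8080/ToolA/1/",
     "title": "Tool A_service1",
     "description": "This is a service of tool A."},
    {"service": "http://localhost:8585/ToolB/1/",
     "title": "Tool B_service1",
     "description": "This is a service of tool B."},
]

def browse(keyword):
    """Return the entries whose title contains the keyword."""
    return [e for e in catalogue if keyword in e["title"]]

print(browse("Tool B")[0]["service"])
# -> http://localhost:8585/ToolB/1/
```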