Understanding Ontologies with LODE
1. Galway, Ireland
2012
The Live OWL Documentation Environment:
a tool for the automatic generation
of ontology documentation
Silvio Peroni – essepuntato@cs.unibo.it
David Shotton – david.shotton@zoo.ox.ac.uk
Fabio Vitali – fabio@cs.unibo.it
http://creativecommons.org/licenses/by-sa/3.0
2. Outline
• Context: ontology understanding
• Tools to improve ontology understanding through ontology
documentation
• LODE, the Live OWL Documentation Environment
• User testing session: assessing LODE usability
• Conclusions
3. Does Semantic Web need interfaces?
“ After 10+ years of work on various aspects of the Semantic Web and its constituent technologies, I am now fully convinced that most of the remaining challenges to realize the Semantic Web vision [...] come down to user interfaces and usability. Somehow, I repeatedly run into a situation where some use of Semantic Web technologies that would make a nice end-user application is ‘blocked’ by the fact that the user interface is the real challenge. ” Ora Lassila, 2007
http://www.lassila.org/blog/archive/2007/03/semantic_web_so_1.html
“ We do not yet have Semantic Web technology available which is easily usable by grandparents and children. That is something which we are developing. We have a team working exactly on that at MIT, making programs to allow people, normal people, to read and write and process their data. ” Tim Berners-Lee, 2007
http://www.itworld.com/070709future?page=0,3
“ There are lots of different ways that people need to be able to look at data. You need to be able to browse through it piece by piece [...] You need to be able to look for patterns of particular things that have happened, exploring the world of data. Because this is data, we need to be able to use all of the power that traditionally we've used for data. When I've pulled in my chosen data set, using a query, I want to be able to do [things like] maps, graphs, analysis, and statistical stuff. So when you talk about user interfaces for this, it's really very very broad. Yes I think it's important. ” Tim Berners-Lee, 2009
http://www.readwriteweb.com/archives/readwriteweb_interview_with_tim_berners-lee_part_2.php
Any strategy that guarantees the broad adoption of Semantic Web
technologies must address the need for improved human interaction
with semantic models and data
5. Current interfaces
• A lot of work has already been done in this direction
✦ ontology development editors
✦ ontology visualisation and navigation tools
✦ Semantic Web search engines
✦ semantic desktop applications
✦ authoring tools for semantic data
✦ etc.
7. Interaction with ontologies
• The Semantic Web is a multi-disciplinary field that interests non-CS and
domain-centric communities – involving different kinds of people, such as
publishers, archivists, legal experts, sociologists, philosophers – who may
have little (or no) knowledge of Semantic Web technologies
• Issue: what is still missing are tools that really assist people who are
not expert in semantic technologies in dealing with and publishing
semantic data
✦ How does a publisher (who may not be expert in semantic technologies) choose/
develop/use a suitable ontology for describing the publishing domain?
• Human interaction with ontologies usually involves the following steps:
✦ people need to understand existing models with the minimum amount of effort
✦ then, if the existing vocabularies/ontologies are not able to fully describe the domain in
consideration, people develop new models
✦ people have to add data according to adopted or developed models and to modify
those data in the future
8. Interaction with ontologies
Today’s topic: people need to understand existing models with the minimum amount of effort
9. Ontology documentation
• Usually, the first activity performed when someone wants to understand the
extent of a particular ontology is to consult its human-readable
documentation
• A large number of ontologies, especially those used in the Linked Data world,
have good comprehensive Web pages describing their theoretical
backgrounds and the features of their developed entities
• Problems arise when we look at under-developed models, since natural
language documentation is usually only published once an ontology has
become stable
✦ This approach is justifiable: writing proper documentation costs effort, and re-writing it
every time the developing ontology is modified is not practical
• In the absence of documentation, a way of getting a sense of existing ontologies
is to open them in an ontology editor so as to explore their axioms
✦ This can be challenging and time-consuming, presenting a barrier that is too great for the
majority of non-specialists
10. Automatic production of documentation
• Another way is to automatically create a first draft of such a
documentation starting from:
✦ labels (i.e. rdfs:label)
✦ comments (i.e. rdfs:comment)
✦ other kinds of annotations (e.g. dc:description, dc:creator, dc:date)
✦ the logical structure of the ontology itself
• Applications have already been developed for this purpose, e.g.:
✦ Neologism – http://neologism.deri.ie
✦ OWLDoc – http://code.google.com/p/co-ode-owl-plugins/wiki/OWLDoc
✦ Paget – http://code.google.com/p/paget
✦ Parrot – http://ontorule-project.eu/parrot/parrot
✦ SpecGen – http://forge.morfeo-project.org/wiki_en/index.php/SpecGen
✦ VocDoc – http://kantenwerk.org/vocdoc/
• We developed a new application for the same purpose
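The idea behind these tools can be sketched in a few lines: walk the ontology and turn each entity's label and comment into a documentation stub. The sketch below is mine, not any of the listed tools; it uses only the Python standard library on a tiny hypothetical RDF/XML ontology (real tools parse full OWL, handle many serialisations, and generate richer output):

```python
import xml.etree.ElementTree as ET

RDFS = "http://www.w3.org/2000/01/rdf-schema#"
OWL = "http://www.w3.org/2002/07/owl#"

# A tiny hypothetical RDF/XML ontology standing in for a real one.
ONTOLOGY = f"""<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:rdfs="{RDFS}" xmlns:owl="{OWL}">
  <owl:Class rdf:about="http://example.org/Thesis">
    <rdfs:label>thesis</rdfs:label>
    <rdfs:comment>A book authored by a candidate for a degree.</rdfs:comment>
  </owl:Class>
</rdf:RDF>"""

def draft_docs(rdf_xml: str) -> str:
    """Emit a 'label: comment' documentation stub for every owl:Class found."""
    root = ET.fromstring(rdf_xml)
    stubs = []
    for cls in root.findall(f"{{{OWL}}}Class"):
        label = cls.findtext(f"{{{RDFS}}}label", default="(unnamed)")
        comment = cls.findtext(f"{{{RDFS}}}comment", default="(no description)")
        stubs.append(f"{label}: {comment}")
    return "\n".join(stubs)

print(draft_docs(ONTOLOGY))
# → thesis: A book authored by a candidate for a degree.
```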
11. Making ontology documentation with LODE
• The Live OWL Documentation Environment (LODE)
is a novel online service that automatically generates a human-
readable description of an OWL ontology (or, more generally, an
RDF vocabulary), taking into account:
✦ annotations
✦ classes
✦ object properties
✦ data properties
✦ named individuals
✦ annotation properties
✦ meta-modelling (punning)
✦ general axioms
✦ SWRL rules
✦ namespace declarations
• It orders ontological entities with the appearance and
functionality of a W3C Recommendation document by use of CSS,
returning a human-readable HTML page designed for easy
browsing and navigation by means of embedded links
12. Making ontology documentation with LODE
✦ XSLT-based technology
✦ Handles different linearizations: RDF/XML, Turtle, Manchester Syntax, OWL/XML
✦ Open source
✦ Documentation in different languages
✦ Freely available online at http://www.essepuntato.it/lode
16. An example
XSLT processor applied to an ontology, e.g.:
http://www.essepuntato.it/lode/http://www.essepuntato.it/2012/04/tvc
[Screenshots of the generated documentation highlight: information about the ontology, the general ToC, local ToCs, the use of images in descriptions, axioms rendered in Manchester Syntax, and the list of all SWRL rules]
31. How to call the service
http://www.essepuntato.it/lode/optional-parameters/ontology-url
✦ http://www.essepuntato.it/lode – URL to call the service
✦ optional-parameters – slash-separated parameters
✦ ontology-url – full “http://...” ontology URL
32. How to call the service
• Optional parameters:
✦ owlapi – ontology-url is loaded through the OWLAPI, stored as an RDF/XML string that is finally
transformed into an HTML file by the XSLT processor. It allows the generation of documentation for
ontologies stored in Turtle, Manchester Syntax or OWL/XML.
✦ imported – the axioms in the imported ontologies are added to the HTML documentation of
ontology-url.
✦ closure – all the axioms in the transitive closure of ontology-url are added to its HTML documentation.
✦ reasoner – the inferred axioms of ontology-url (through the Pellet reasoner) will be added to its HTML
documentation.
✦ lang=XX – the selected language “XX” (e.g. “it”, “fr”, “de”) will be used as the preferred language
instead of English when showing the documentation of ontology-url.
• Examples:
✦ http://www.essepuntato.it/lode/owlapi/http://xmlns.com/foaf/spec/index.rdf
✦ http://www.essepuntato.it/lode/imported/http://purl.org/spar/fabio
✦ http://www.essepuntato.it/lode/lang=it/http://www.essepuntato.it/2011/02/argumentmodel
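Following the URL pattern above, a call is composed by simple slash concatenation; a minimal sketch (the helper name `lode_url` is mine, not part of LODE):

```python
def lode_url(ontology_url: str, *params: str) -> str:
    """Compose http://www.essepuntato.it/lode/<params...>/<ontology-url>."""
    base = "http://www.essepuntato.it/lode"
    # Optional parameters go between the service URL and the ontology URL.
    return "/".join([base, *params, ontology_url])

print(lode_url("http://purl.org/spar/fabio", "imported"))
# → http://www.essepuntato.it/lode/imported/http://purl.org/spar/fabio
```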
33. How to call the service
Notes recovered from the slide annotations: the owlapi parameter is always strongly recommended; the reasoner parameter can be very time consuming.
37. Latest features
• Web GUI: takes as input both online and local ontologies (if you do not check the option, LODE provides it by default)
38. Latest features
• XSLT structural reasoner: the documentation distinguishes axioms that were asserted in the ontology from those that have been inferred by LODE
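As a toy illustration of the kind of inference a structural reasoner adds (this is a hypothetical sketch, not LODE's actual implementation), the snippet below computes the transitive closure of subclass assertions, so that inferred superclasses can be shown alongside asserted ones:

```python
def subclass_closure(asserted: set[tuple[str, str]]) -> set[tuple[str, str]]:
    """Return only the (subclass, superclass) pairs inferred by transitivity."""
    inferred = set(asserted)
    changed = True
    while changed:  # keep joining pairs until a fixed point is reached
        changed = False
        for (a, b) in list(inferred):
            for (c, d) in list(inferred):
                if b == c and (a, d) not in inferred:
                    inferred.add((a, d))
                    changed = True
    return inferred - asserted  # the newly inferred pairs only

# Hypothetical class names, not from a real ontology.
asserted = {("Thesis", "Book"), ("Book", "Document")}
print(subclass_closure(asserted))  # → {('Thesis', 'Document')}
```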
40. User testing session
• We asked 13 subjects, all Semantic Web experts, to complete 5 tasks
• There were no “administrators” observing the subjects while they were undertaking these tasks
• We used a medium-size ontology, namely FaBiO (http://purl.org/spar/fabio): 214 classes, 69 object properties, 45 data properties and 15 individuals
• Each subject used his/her own computer, without taking note of the time spent to accomplish each task
1. We first asked subjects to complete a questionnaire about their background knowledge in OWL, ontology engineering and ontology documentation [max. 2 mins.]
2. As a warm-up task, we asked subjects to use LODE to explore the FOAF ontology in order to become familiar with the structure of the documentation it produced [max. 5 mins.]
3. As the real test, we asked subjects to complete five tasks using the documentation of the FaBiO ontology [max. 5 mins. per task]
Task 1 – Describe the main aim of the ontology
Task 2 – Describe what the class doctoral thesis defines
Task 3 – Describe what the object property has subject term describes, and record its domain and range classes
Task 4 – Record the class having the largest number of direct individuals
Task 5 – Record all the subclasses and properties involving the class fabio:Item
4. Finally, we asked subjects to fill in two questionnaires (the SUS questionnaire and a textual questionnaire) to report their experience of using LODE to complete these tasks [max. 5 mins.]
47. Results
• 65 tasks in total (5 x 13 subjects)
• 58 were completed successfully, giving an overall success rate of 89%, distributed as
follows: 13 (out of 13) in Task 1, 13 in Task 2, 13 in Task 3, 10 in Task 4 and 9 in Task 5
Measure        Mean   Max. value   Min. value   S.d.
SUS value      77.7   92.5         57.5         12.5
Usability      76.4   90.6         56.3         12.8
Learnability   82.7   100          62.5         14.9
The mean SUS score for LODE was 77.7 (in a 0 to 100 range), abundantly surpassing the target score of 68 that demonstrates a good level of usability (Sauro, J. (2011). A Practical Guide to the System Usability Scale: Background, Benchmarks & Best Practices. ISBN: 978-1461062707)
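For reference, SUS scores are computed with the standard scoring rule (10 answers on a 1–5 Likert scale, rescaled to 0–100); a minimal sketch:

```python
def sus_score(responses: list[int]) -> float:
    """Standard SUS scoring: odd-numbered items contribute (answer - 1),
    even-numbered items contribute (5 - answer); the sum is scaled by 2.5."""
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = sum(r - 1 if i % 2 == 0 else 5 - r
                for i, r in enumerate(responses))
    return total * 2.5

# Best possible answers (5 on odd items, 1 on even items) give 100.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # → 100.0
```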
• Axial coding of the personal comments expressed in the final textual questionnaires
revealed a small number of widely perceived issues
✦ Search (-): no search function was provided to directly look for and access entities of the ontology.
Users acknowledged that since the ontology is on a single web page, they could use (and in fact did use)
the search function of the browser, but many still found it a missing feature.
✦ Readability (+): high praise was given to the clarity of the presentation, the intuitiveness of the
organisation, and the immediacy of identifying the sought information. The good typographical style of
the output is clearly among the best qualities of LODE.
✦ Links within the document (+): the systematic use of internal links to the various features of the
ontology was considered useful and immediately usable.
49. Conclusions (?)
“ No comparison was carried out. Thus, although we know the tool is usable, the
evaluation does not say whether the tool outperforms the state of the art. ” First anonymous reviewer
“ It would be good to add a qualitative comparison between LODE and other tools. ” Second anonymous reviewer
53. User testing session – reprise
• 18 subjects with different expertise in Semantic Web technologies: Semantic Web experts, philosophers, CS researchers, Ph.D. students, company employees
• The subjects were split in three balanced groups of six, one group for each tool to be tested: LODE, Parrot (http://ontorule-project.eu/parrot) and Ontology Browser (http://code.google.com/p/ontology-browser/)
• The test session was structured as before
• We still used FOAF for the warm-up task and FaBiO for the proper experiment
• We used the same five tasks introduced previously, giving max. 10 mins. per task
• We recorded the time each subject spent to accomplish the tasks
• An “administrator” followed all the tests, observing the subjects while they were undertaking the tasks and keeping time, but did not provide any assistance
60. Results – reprise
• 90 tasks in total (5 x 18 subjects)
• 81 were completed successfully, giving an overall success rate of 90%, distributed as follows: 18 (out of 18) in Task 1, 18 in Task 2, 17 in Task 3, 17 in Task 4 and 11 in Task 5
• The 9 failures were distributed as follows: LODE users: 3, Parrot users: 4, and OWLDoc users: 2
SUS and related sub-measure average scores:
Measure        LODE   Parrot   O. Browser
SUS value      73.3   70.8     55.8
Usability      72.9   68.2     54.7
Learnability   75.0   81.3     60.4
Tukey HSD pairwise comparison performed on each measure. Only the difference of the Learnability measure between Parrot and the Ontology Browser was approaching statistical significance (F = 2.71; 0.05 < p-value < 0.1)
Average time in minutes:seconds (sd: standard deviation):
Task    LODE              Parrot             O. Browser
1       1:13 (sd: 0:45)   5:52 (sd: 3:30)    1:41 (sd: 0:57)
2       0:40 (sd: 0:15)   2:45 (sd: 0:41)    2:03 (sd: 1:52)
3       2:39 (sd: 1:01)   6:48 (sd: 2:31)    4:16 (sd: 2:15)
4       1:30 (sd: 0:46)   5:12 (sd: 2:42)    2:49 (sd: 1:24)
5       7:09 (sd: 3:10)   6:34 (sd: 2:32)    6:46 (sd: 3:14)
Total   13:11 (sd: 4:21)  27:11 (sd: 10:30)  17:35 (sd: 6:41)
The difference in completion times for Task 3 between LODE and Parrot was highly significant (F = 6.33; p-value < 0.01). The difference between LODE and Parrot on the total time spent by users to complete all the tasks was statistically significant (F = 5.3; 0.01 < p-value < 0.05)
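The F statistics reported above come from one-way ANOVA; a minimal pure-Python sketch of the computation (the sample data below are made up for illustration, not the study's raw times):

```python
from statistics import mean

def anova_f(*groups: list[float]) -> float:
    """One-way ANOVA F statistic: between-group mean square over
    within-group mean square."""
    all_vals = [x for g in groups for x in g]
    grand = mean(all_vals)
    k, n = len(groups), len(all_vals)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Two hypothetical groups of completion times (in minutes).
print(anova_f([1.0, 2.0, 3.0], [2.0, 3.0, 4.0]))  # → 1.5
```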
64. Conclusions (!)
• I introduced LODE, an online service that allows the automatic generation of
ontology documentation starting from annotations and ontological axioms
• We evaluated its usability and effectiveness through two different user testing
sessions, which also included a quantitative comparison with other tools (i.e. Parrot
and Ontology Browser)
• We plan to extend LODE features to include suggestions highlighted by our users
✦ Search function to directly look for and access entities of the ontology
✦ Approaches to access and navigate large ontologies
✦ Statistics about entities
✦ Tree display for classes
• We plan to conduct another user testing session to compare LODE with other
ontology visualisation and browsing tools such as KC-Viz, OWLViz and CropCircle
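As a reminder of how the service is invoked: LODE is called by appending the ontology URL to the service URL, so a client only needs simple string concatenation. A minimal sketch follows; the service URL is the one published at the time of the talk, and its current availability is not guaranteed:

```python
def lode_url(ontology_iri, service="http://www.essepuntato.it/lode/"):
    """Build the URL at which LODE serves the documentation of an ontology.

    LODE dereferences the ontology IRI appended after the service URL and
    renders an HTML documentation page; fetching the returned URL with any
    HTTP client retrieves the generated documentation.
    """
    return service + ontology_iri

# Hypothetical invocation (requires the service to be reachable):
# urllib.request.urlopen(lode_url("http://purl.org/spar/fabio")).read()
```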
67. Conclusions (!)
And, most of all, we are looking for participants for a 30-minute online test, so as to obtain more statistically significant results. Would you like to help us? Please contact me!
68. Thanks for your attention
A special thanks to all the people who already helped us and participated in user testing sessions
69. Some qualitative findings
• We used a grounded theory approach to extract the most Strauss, A. Corbin, J. (1998). Basics of
relevant concepts from questionnaires, containing the following Qualitative Research Techniques and
questions: Procedures for Developing Grounded
Theory (2nd edition). Sage Publications:
✦ How effectively did [tool X] support you in the previous tasks?
London. ISBN: 978- 0803959408
✦ What were the most useful features of [tool X] to help you realise your tasks?
✦ What were the main weaknesses that [tool X] exhibited in supporting your tasks?
✦ Can you think of any additional features that would have helped you to accomplish your tasks?
• Six subjects working on LODE, two on the Ontology Browser and one on Parrot mentioned, in some form or another, the organisation and structure of the information presented, some in positive and some in negative terms
• Three subjects for LODE and three for Parrot mentioned search as a serious problem: since both are single-page tools, the lack of an in-page mechanism for searching for strings, and the reliance on the browser's search tool alone, make looking up strings a more complicated task than it should be
• Both LODE and Parrot received praise for the links connecting entities along the superclass-subclass axis, and LODE also along the class-property axis, while Parrot was criticised three times for the lack of clarity in the relationships between individuals and their classes
Editor's notes
Let's start talking about interfaces and the Semantic Web. It's a common view, as highlighted in several comments, that interfaces are one of the crucial aspects of making the Semantic Web usable even to common Web users. Actually, the main point is that any....
Of course a lot of work has been done in the past in this direction...
As you know, the Semantic Web is...
We performed a user testing session so as to assess the usability of the documentation produced by LODE when users have to deal with tasks involving ontology understanding and browsing. We involved 13 users, mostly Semantic Web experts and practitioners. The test was performed without any administrator observing...
Technically, my presentation should finish here, according to the paper contents. However, after submitting the camera-ready, I was really concerned by some reviewers' comments, which asked for a comparison with other systems producing ontology documentation. I would like to show you a relevant record of my concerns, which of course has a predictable end.