The increase in the number of ontologies on the Semantic Web, and the endorsement of OWL as its language of discourse, has led to a scenario where research efforts in ontology engineering can make ontology development through reuse a viable option for ontology developers. The advantages are twofold: when existing ontological artefacts from the Semantic Web are reused, semantic heterogeneity is reduced and interoperability, the essence of the Semantic Web, is promoted. From the perspective of ontology development, reuse cuts both cost and development time, since ontology engineering requires expert domain skills and is a time-consuming process. We have devised a framework to address the challenges associated with reusing ontologies from the Semantic Web. In this paper we present the methods adopted for extraction and integration of concepts across multiple ontologies.
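The abstract above mentions integrating concepts across multiple ontologies. As a hedged illustration only (this is not the paper's actual algorithm), a minimal label-based concept integration step might look like this, where labels that normalise to the same string are unified:

```python
# Hypothetical sketch of label-based concept integration across two
# ontologies; the normalisation and matching rules are illustrative
# assumptions, not the framework described in the paper.

def normalise(label):
    """Lower-case a concept label and strip common separators."""
    return label.lower().replace("_", " ").replace("-", " ").strip()

def integrate(concepts_a, concepts_b):
    """Merge two concept lists, unifying labels that normalise equally."""
    merged = {}
    for source, concepts in (("A", concepts_a), ("B", concepts_b)):
        for label in concepts:
            key = normalise(label)
            merged.setdefault(key, {"label": label, "sources": set()})
            merged[key]["sources"].add(source)
    return merged

merged = integrate(["Journal_Article", "Author"], ["journal article", "Publisher"])
shared = [k for k, v in merged.items() if v["sources"] == {"A", "B"}]
print(shared)  # → ['journal article']
```

Real systems would add synonym resolution and structural matching on top of such a lexical first pass.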
ONTOLOGY VISUALIZATION PROTÉGÉ TOOLS – A REVIEW (IJAIT)
The document discusses ontology visualization tools in Protégé. It reviews four main visualization methods used in Protégé tools: indented list, node-link and tree, zoomable, and focus+context. It then examines specific Protégé tools that use each method, including their key features and limitations. The tools discussed are Protégé Class Browser (indented list), Protégé OntoViz and OntoSphere (node-link and tree), Jambalaya (zoomable), and Protégé TGVizTab (focus+context). The document aims to categorize the characteristics of existing Protégé visualization tools to assist in method selection and promote future research.
A Comparative Study of Ontology Building Tools for Semantic Web Applications (IJWEST)
This document provides a comparative study of four popular ontology building tools: Protégé 3.4, IsaViz, Apollo, and SWOOP. It discusses the features and functionalities of each tool, including their capabilities for ontology editing, browsing, documentation, import/export of formats, and visualization. The document aims to identify existing ontology tools that are freely available and can be used to develop ontologies for various application domains such as transport, tourism, health, and natural language. It evaluates the tools based on criteria like interoperability, openness, ease of updating/maintaining ontologies, and market penetration.
A Comparative Study of Ontology Building Tools for Semantic Web Applications
Ontologies have recently gained popularity in the area of knowledge management and knowledge sharing, especially after the evolution of the Semantic Web and its supporting technologies. An ontology defines the terms and concepts (meaning) used to describe and represent an area of knowledge. The aim of this paper is to identify all possible existing ontologies and ontology management tools (Protégé 3.4, Apollo, IsaViz & SWOOP) that are freely available and to review them in terms of: a) interoperability, b) openness, c) ease of updating and maintenance, and d) market status and penetration. The results of the review are analyzed for each application area, such as transport, tourism, personal services, health and social services, natural languages and other HCI-related domains. Ontology building/management tools are used by different groups of people for performing diverse tasks. Although each tool provides different functionalities, most users use only one, because they are not able to interchange their ontologies from one tool to another. In addition, we considered the compatibility of different ontologies with different development and management tools. The paper also concerns the detection of commonalities and differences between the examined ontologies, both within the same domain (application area) and among different domains.
SWSN UNIT-3.pptx
Ontology engineering involves constructing ontologies through various methods. It begins with defining the scope and evaluating existing ontologies for reuse. Terms are enumerated and organized in a taxonomy with defined properties, facets, and instances. The ontology is checked for anomalies and refined iteratively. Popular tools for ontology development include Protégé and WebOnto. Methods like METHONTOLOGY and the On-To-Knowledge methodology provide processes for building ontologies from scratch or reusing existing ones. Ontology sharing requires mapping between ontologies to allow interoperability, and libraries exist for storing and accessing ontologies.
Here are the key points about using content-based filtering techniques:
- Content-based filtering relies on analyzing the content or description of items to recommend items similar to what the user has liked in the past. It looks for patterns and regularities in item attributes/descriptions to distinguish highly rated items.
- The item content/descriptions are analyzed automatically by extracting information from sources like web pages, or entered manually from product databases.
- It focuses on objective attributes about items that can be extracted algorithmically, like text analysis of documents.
- However, personal preferences and what makes an item appealing are often subjective qualities not easily extracted algorithmically, like writing style or taste.
- So while content-based filtering can effectively match items on objective, extractable attributes, it is limited in capturing these subjective qualities.
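The idea in the bullets above can be sketched minimally: score unseen items by attribute overlap with the items a user already liked. The catalogue data and the Jaccard scoring rule here are illustrative assumptions, not a specific system's design:

```python
# Illustrative content-based filtering: recommend items whose attribute
# sets overlap most with the attributes of items the user already liked.

def profile(liked_items, catalogue):
    """Collect the attributes of every item the user liked."""
    attrs = set()
    for item in liked_items:
        attrs |= catalogue[item]
    return attrs

def recommend(liked_items, catalogue):
    """Rank unseen items by Jaccard similarity to the user profile."""
    user_attrs = profile(liked_items, catalogue)
    scores = {}
    for item, attrs in catalogue.items():
        if item in liked_items:
            continue
        union = user_attrs | attrs
        scores[item] = len(user_attrs & attrs) / len(union) if union else 0.0
    return sorted(scores, key=scores.get, reverse=True)

catalogue = {
    "doc1": {"ontology", "owl", "semantic-web"},
    "doc2": {"ontology", "protege", "visualization"},
    "doc3": {"cooking", "recipes"},
}
print(recommend(["doc1"], catalogue))  # → ['doc2', 'doc3']
```

Note how the subjective-quality limitation shows up directly: only attributes present in the catalogue can influence the ranking.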
Association Rule Mining Based Extraction of Semantic Relations Using Markov Logic Networks (IJWEST)
An ontology is a conceptualization of a domain into a human-understandable yet machine-readable format consisting of entities, attributes, relationships and axioms. Ontologies formalize the intensional aspects of a domain, whereas the extensional part is provided by a knowledge base that contains assertions about instances of concepts and relations. Using semantic relations, it would be possible to extract the whole family tree of a prominent personality using a resource like Wikipedia. In a way, relations describe the semantic relationships among the entities involved, which is beneficial for a better understanding of human language. Relations can be identified from the result of concept hierarchy extraction; however, the existing ontology learning process produces only the concept hierarchy, not the semantic relations between concepts. Here we construct predicates and first-order logic formulae, and find the inference and learning weights using a Markov Logic Network. To improve the relations for every input, and the relations between contents, we propose the concept of ARSRE. This method finds frequent items between concepts and converts existing lightweight ontologies into formal ones. Experimental results show better extraction of semantic relations than the state-of-the-art method.
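The association-rule step described above can be sketched as mining concept pairs whose co-occurrence meets minimum support and confidence. The thresholds and the toy concept "transactions" below are illustrative assumptions, not the ARSRE method itself:

```python
# Hedged sketch of association rule mining over concept co-occurrences:
# emit candidate relations a -> b whose support and confidence exceed
# thresholds. Corpus and thresholds are invented for illustration.

from itertools import combinations

def mine_relations(transactions, min_support=0.5, min_confidence=0.6):
    """Return (a, b, support, confidence) rules over the thresholds."""
    n = len(transactions)
    counts, pair_counts = {}, {}
    for t in transactions:
        for c in t:
            counts[c] = counts.get(c, 0) + 1
        for a, b in combinations(sorted(t), 2):
            pair_counts[(a, b)] = pair_counts.get((a, b), 0) + 1
    rules = []
    for (a, b), k in pair_counts.items():
        support = k / n
        if support < min_support:
            continue
        for x, y in ((a, b), (b, a)):
            confidence = k / counts[x]
            if confidence >= min_confidence:
                rules.append((x, y, support, confidence))
    return rules

transactions = [
    {"person", "birthplace"},
    {"person", "birthplace", "award"},
    {"person", "award"},
]
for rule in mine_relations(transactions):
    print(rule)
```

In the paper's pipeline such candidate pairs would then be weighted and validated by the Markov Logic Network rather than accepted directly.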
A Comparative Study of Recent Ontology Visualization Tools with a Case of Diabetes Data (IJORCS)
Ontology is a conceptualization of a domain into a machine-readable format. Ontologies are becoming increasingly popular modelling schemas for knowledge management services and applications, and focus on developing tools to graphically visualise ontologies is rising to aid their assessment and analysis. Graph visualisation helps to browse and comprehend the structure of ontologies, and a number of ontology visualisations have been embedded in ontology management tools. The primary goal of this paper is to analyze recently implemented ontology visualization tools and their contributions to enriching users' cognitive support. This work also presents the preliminary results of an evaluation of three visualization tools to determine the suitability of each method for end-user applications where ontologies are used as browsing aids, with a case of diabetes data.
A novel method for generating an elearning ontologyIJDKP
The Semantic Web provides a common framework that allows data to be shared and reused across applications, enterprises, and community boundaries. Existing web applications need to express semantics that can be extracted from users' navigation and content in order to fulfil users' needs. E-learning has specific requirements that can be satisfied through the extraction of semantics from learning management systems (LMS) that use relational databases (RDB) as a backend. In this paper, we propose transformation rules for building an OWL ontology from the RDB of the open-source LMS Moodle, transforming all possible cases in RDBs into ontological constructs. The proposed rules are enriched by analyzing stored data to detect disjointness and totality constraints in hierarchies, and by calculating the participation level of tables in n-ary relations. In addition, our technique is generic, and hence can be applied to any RDB.
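A common RDB-to-ontology rule of the kind alluded to above maps each table to a class, each plain column to a datatype property, and each foreign key to an object property. The sketch below is a rough illustration of that rule; the schema and naming conventions are invented, not Moodle's actual tables or the paper's exact rules:

```python
# Illustrative RDB-to-ontology transformation: table -> class, plain
# column -> datatype property, foreign key -> object property linking
# the two classes. The schema here is a made-up two-table example.

def rdb_to_triples(schema):
    """schema: {table: {"columns": [...], "fks": {column: target_table}}}"""
    triples = []
    for table, info in schema.items():
        cls = table.capitalize()
        triples.append((cls, "rdf:type", "owl:Class"))
        for col in info["columns"]:
            if col in info.get("fks", {}):
                target = info["fks"][col].capitalize()
                triples.append((f"has{target}", "rdf:type", "owl:ObjectProperty"))
                triples.append((f"has{target}", "rdfs:domain", cls))
                triples.append((f"has{target}", "rdfs:range", target))
            else:
                triples.append((f"{table}_{col}", "rdf:type", "owl:DatatypeProperty"))
                triples.append((f"{table}_{col}", "rdfs:domain", cls))
    return triples

schema = {
    "student": {"columns": ["name", "course_id"], "fks": {"course_id": "course"}},
    "course": {"columns": ["title"], "fks": {}},
}
for t in rdb_to_triples(schema):
    print(t)
```

The paper's contribution goes further, using the stored data itself to infer disjointness and totality constraints, which a purely schema-driven rule like this cannot detect.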
Ontology languages are used to model the semantics of concepts within a particular domain and the relationships between those concepts. The Semantic Web standards provide a number of modelling languages that differ in their level of expressivity and are organized in the Semantic Web Stack in such a way that each language level builds on the expressivity of the level below. There are several problems when one attempts to use independently developed ontologies. Adapting existing ontologies for new purposes requires that certain operations be performed on them, and these operations are currently performed in a semi-automated manner. This paper seeks to model categorically the syntax and semantics of RDF ontologies as a step towards the formalization of ontological operations using category theory.
Semantic Web: Technologies and Applications for the Real World (Amit Sheth)
Amit Sheth and Susie Stephens, "Semantic Web: Technologies and Applications for the Real World," Tutorial at the 2007 World Wide Web Conference, Banff, Canada.
Tutorial discusses technologies and deployed real-world applications through 2007.
Tutorial description at: http://www2007.org/tutorial-T11.php
PROPOSAL OF AN HYBRID METHODOLOGY FOR ONTOLOGY DEVELOPMENT BY EXTENDING THE P...
The W3C's Semantic Web intends a common framework that allows data to be shared and reused across applications and enterprises. The Semantic Web and its related technologies are the main directions of future web development, where machine-processable information supports user tasks. Ontologies play a vital role in the Semantic Web. Research on ontology engineering has pointed out that an effective ontology application development methodology with integrated tool support is mandatory for its success. There are potential benefits to ontology engineering in making the toolset of Model Driven Architecture applicable to ontology modeling. Since software engineering and ontology engineering are two complementary branches, extending the well-proven methodologies and UML-based modeling approaches of software engineering to ontology engineering can bridge the gap between the two. This paper attempts to suggest a hybrid methodology for ontology development derived from mature software engineering practice. The philosophical and engineering aspects of the newly derived methodology are described clearly, and an attempt has been made to apply the proposed methodology with the Protégé editor. The full-fledged implementation of a domain ontology and its validation is a future research direction.
SEMANTIC INTEGRATION FOR AUTOMATIC ONTOLOGY MAPPING
In the last decade, ontologies have played a key technology role for information sharing and agent interoperability in different application domains. In the Semantic Web domain, ontologies are used to face the great challenge of representing the semantics of data, in order to bring the actual web to its full power and hence achieve its objective. However, using ontologies as common and shared vocabularies requires a certain degree of interoperability between them. To meet this requirement, ontology mapping is a solution not to be avoided. Indeed, ontology mapping builds a meta layer that allows different applications and information systems to access and share their information, after resolving the different forms of syntactic, semantic and lexical mismatches. In the contribution presented in this paper, we have integrated the semantic aspect based on an external lexical resource, WordNet, to design a new algorithm for fully automatic ontology mapping. This fully automatic character is the main difference of our contribution with respect to most of the existing semi-automatic ontology mapping algorithms, such as Chimaera, Prompt, Onion and Glue. To further enhance the performance of our algorithm, the mapping discovery stage is based on the combination of two sub-modules: the former analyses the concepts' names and the latter analyses their properties. Each of these two sub-modules is itself based on a combination of lexical and semantic similarity measures.
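The two-sub-module combination described above can be sketched as a weighted sum of a name similarity and a property-set similarity. In this illustration the tiny synonym table stands in for a WordNet lookup, and the bigram measure and 0.5/0.5 weighting are assumptions, not the paper's actual measures:

```python
# Sketch of combined concept matching: name similarity (lexical bigram
# overlap, boosted by a synonym table standing in for WordNet) plus
# property-set similarity, merged by a weighted sum.

SYNONYMS = {("car", "automobile"), ("person", "human")}  # WordNet stand-in

def name_similarity(a, b):
    a, b = a.lower(), b.lower()
    if a == b or (a, b) in SYNONYMS or (b, a) in SYNONYMS:
        return 1.0
    bigrams = lambda s: {s[i:i + 2] for i in range(len(s) - 1)}
    ba, bb = bigrams(a), bigrams(b)
    return len(ba & bb) / len(ba | bb) if ba | bb else 0.0

def property_similarity(props_a, props_b):
    """Jaccard overlap of the two concepts' property sets."""
    union = props_a | props_b
    return len(props_a & props_b) / len(union) if union else 0.0

def concept_similarity(name_a, props_a, name_b, props_b, w=0.5):
    """Weighted combination of the two sub-module scores."""
    return (w * name_similarity(name_a, name_b)
            + (1 - w) * property_similarity(props_a, props_b))

score = concept_similarity("Car", {"speed", "colour"}, "Automobile", {"speed", "owner"})
print(round(score, 2))  # → 0.67
```

A real mapper would threshold such scores to decide which cross-ontology concept pairs constitute a mapping.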
Towards From Manual to Automatic Semantic Annotation: Based on Ontology Eleme...
This document describes a proposed system for automatic semantic annotation of web documents based on ontology elements and relationships. It begins with an introduction to semantic web and annotation. The proposed system architecture matches topics in text to entities in an ontology document. It utilizes WordNet as a lexical ontology and ontology resources to extract knowledge from text and generate annotations. The main components of the system include a text analyzer, ontology parser, and knowledge extractor. The system aims to automatically generate metadata to improve information retrieval for non-technical users.
This document describes a proposed method for subontology-assisted web-based e-learning for resource management. Key points include:
1. Semantic mapping is used to integrate heterogeneous e-learning databases by mapping relational schemas to a global ontology.
2. Subontologies (SubOs) are context-specific portions of the full ontology that are evolved over time based on locality of resource reuse.
3. A SubO-based approach is used to achieve adaptive and efficient resource management and reuse by matching user requests to SubOs.
Building a Multilingual Ontology for the Education Domain Using the MOnto Method
Ontologies are an emerging technology for building knowledge-based information retrieval systems, used to conceptualize information in a human-understandable manner. Knowledge-based information retrieval is widely used in domains such as education, artificial intelligence and healthcare, and it is important to provide multilingual information for those domains to facilitate multi-language users. In this paper, we propose a multilingual ontology (MOnto) methodology to develop multilingual ontology applications for the education domain. New algorithms are proposed for merging and mapping multilingual ontologies.
Presentation made in the context of the FAO AIMS Webinar titled “Knowledge Organization Systems (KOS): Management of Classification Systems in the case of Organic.Edunet” (http://aims.fao.org/community/blogs/new-webinaraims-knowledge-organization-systems-kos-management-classification-systems)
21/2/2014
An Incremental Method for Meaning Elicitation of a Domain Ontology
This document describes MELIS (Meaning Elicitation and Lexical Integration System), a tool developed to support the annotation of data sources with conceptual and lexical information during the ontology generation process in MOMIS (Mediator envirOnment for Multiple Information Systems). MELIS improves upon MOMIS by enabling more automated annotation of data sources and by extracting relationships between lexical elements using lexical and domain knowledge to provide a richer semantics. The document outlines the MOMIS architecture integrated with MELIS and explains how MELIS supports key steps in the ontology creation process, including source annotation, common thesaurus generation, and relationship extraction.
IJCER (www.ijceronline.com) International Journal of Computational Engineerin...
This document discusses using personalized ontologies to improve web information gathering by representing user profiles. It proposes a model that constructs personalized ontologies by adopting user feedback from a world knowledge base. The model also uses users' local instance repositories to discover background knowledge and populate the ontologies. The proposed ontology model is evaluated against benchmark models through experiments using a large standard dataset.
Although the use of Semantic Web technologies in the learning development field is a new research area, some authors have already proposed ideas of how such systems could operate. Specifically, from analysis of the literature in the field, we have identified three types of existing applications that employ these technologies to support learning. These applications aim at: enhancing the reusability of learning objects by linking them to an ontological description of the domain, or, more generally, describing relevant dimensions of the learning process in an ontology; providing a comprehensive authoring system to retrieve and organize web material into a learning course; and constructing advanced strategies to present annotated resources to the user, in the form of browsing facilities, narrative generation and final rendering of a course. In contrast with the approaches cited above, here we propose an approach that is modeled on narrative studies and on their transposition to the digital world. In the rest of the paper, we present the theoretical basis that inspires this approach and show some examples that are guiding our implementation and testing of these ideas within e-learning. Ontologies are recognized as the most important component in achieving semantic interoperability of e-learning resources, and the benefits of their use have already been recognized in the learning technology community. In order to better define the different aspects of ontology applications in e-learning, researchers have given several classifications of ontologies. We refer to a general one that differentiates between three dimensions ontologies can describe: content, context, and structure. Most of the present research has been dedicated to the first group.
A well-known example of such an ontology is based on the ACM Computing Classification System (ACM CCS) and defined in Resource Description Framework Schema (RDFS). It is used in Moodle to classify learning objects with the goal of improving search. The chapter will cover the terms of the Semantic Web, e-learning systems design and management in e-learning (Moodle), some studies concerning e-learning and the Semantic Web, and the tools to be used in this paper, and lastly we shall discuss the expected contribution. Special attention will be put on the above topics.
The objective of this webinar is to provide a brief overview of Knowledge Organization Systems (KOS) and the tools used for managing them. The presentation focuses on the management of the multilingual Organic.Edunet ontology as a case study, covering aspects such as collaborative work, multilinguality needs, and updating of concepts using an online KOS management tool (MoKi).
This document presents an approach for extracting ontologies from heterogeneous documents. It discusses how ontologies play an important role in the semantic web for knowledge management and interoperability. The authors describe a clustering algorithm that identifies concepts and relationships by processing sentences from input documents. Key steps include marking the first word of each sentence as a parent concept and subsequent words as child concepts. They also describe a harmonization process to integrate extracted ontologies with existing knowledge bases by matching and merging corresponding concepts and relations. The authors applied their approach to documents in text, document and PDF formats, and were able to extract concept hierarchies and relationships from the input files.
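The extraction rule described in that summary — first word of a sentence as parent concept, subsequent words as children — can be sketched literally. This is a deliberately naive illustration of the stated rule, not the authors' full clustering algorithm, which would also filter stop words and merge clusters:

```python
# Literal sketch of the stated rule: the first word of each sentence
# becomes a parent concept and the remaining words its children.

def extract_hierarchy(sentences):
    """Build {parent: set(children)} from raw sentences."""
    hierarchy = {}
    for sentence in sentences:
        words = sentence.lower().strip(".").split()
        if len(words) < 2:
            continue  # a one-word sentence yields no relationship
        parent, children = words[0], words[1:]
        hierarchy.setdefault(parent, set()).update(children)
    return hierarchy

docs = ["Ontology supports interoperability.", "Ontology enables reuse."]
print(extract_hierarchy(docs))
```

The harmonization step the summary mentions would then match and merge such extracted concepts against an existing knowledge base.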
Towards Ontology Development Based on Relational Database
Ontology is defined as the formal, explicit specification of a shared conceptualization. It has been widely used in almost all fields, especially artificial intelligence, data mining, and the Semantic Web, and is constructed using various sets of resources. It has now become an important task to improve the efficiency of ontology construction, which calls for an automated method of building ontologies from database resources. Since manual construction is error-prone and falls short of expectations, automatic construction of ontologies from databases has been developed. Construction rules for building ontologies from relational data sources are then put forward. Finally, an ontology for "automated building of ontology from relational data sources" has been implemented.
The document describes an ontology evolution process for classifying web services. It uses three techniques - TF/IDF, web context extraction, and free text descriptor verification - to analyze web service descriptions and automatically generate concepts and relationships for the ontology. TF/IDF and web context extraction are used to identify significant concepts from the descriptions. The free text descriptor is then used to validate these concepts and resolve any conflicts with the existing ontology. The combined approach aims to accurately define and evolve the ontology over time as new web services are added.
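The TF/IDF scoring used to flag significant concepts can be sketched as follows (a textbook tf-idf, not necessarily the paper's exact weighting; the service descriptions are invented):

```python
import math
from collections import Counter

def tf_idf(docs):
    """Score term significance per document: tf(t, d) * log(N / df(t))."""
    n = len(docs)
    tokenized = [d.lower().split() for d in docs]
    df = Counter()                     # document frequency of each term
    for toks in tokenized:
        df.update(set(toks))
    scores = []
    for toks in tokenized:
        tf = Counter(toks)
        scores.append({t: tf[t] / len(toks) * math.log(n / df[t]) for t in tf})
    return scores

descs = ["weather forecast service", "currency exchange service"]
scores = tf_idf(descs)
```

Terms shared by every description (here "service") score zero, so only distinguishing terms survive as candidate concepts.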
This document discusses applying semantic web technologies to enhance the services of e-learning systems. It proposes developing a semantic learning management system (S-LMS) based on technologies like XML, RDF, OWL and SPARQL to automate and accurately search for information on e-learning systems like Moodle. The S-LMS would add semantic capabilities to allow students to search for learning resources based on semantics and provide personalized, customized content tailored to individual needs. It presents applying ontologies and metadata to Moodle in order to define domains and describe learning content in a way that improves search, interoperability and reusability of educational resources.
The document discusses a proposal to automatically generate knowledge chains (KCs) to recommend to learners based on monitoring their web navigation. A software agent would observe the pages a learner visits and the time spent on each. It would then classify page content using an ontology and web mining techniques. Based on the related concepts identified across visited pages and the navigation path, the agent aims to build potential KCs representing that knowledge to recommend back to the learner. This approach intends to motivate learners to build their personal knowledge by creating KCs for them based on their own browsing behavior and content.
Association Rule Mining Based Extraction of Semantic Relations Using Markov ... (dannyijwest)
Ontology is a conceptualization of a domain in a human-understandable, yet machine-readable, format consisting of entities, attributes, relationships and axioms. Ontologies formalize the intensional aspects of a domain, whereas the extensional part is provided by a knowledge base that contains assertions about instances of concepts and relations. With semantic relations it becomes possible, for example, to extract the whole family tree of a prominent personality using a resource like Wikipedia. In a way, relations describe the semantic relationships among the entities involved, which is beneficial for a better understanding of human language. Relations can be identified from the result of concept hierarchy extraction. However, the existing ontology learning process only produces the concept hierarchy; it does not produce the semantic relations between the concepts. Here, we construct predicates and first-order logic formulas, and perform inference and weight learning using a Markov Logic Network. To improve the relations extracted from every input, and the relations between the contents, we propose the concept of ARSRE.
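A minimal support/confidence miner over concept co-occurrences illustrates the association-rule idea behind ARSRE (the thresholds and sample data are ours; the paper additionally weights rules via a Markov Logic Network):

```python
from itertools import combinations

def mine_pairs(transactions, min_support=0.5, min_conf=0.6):
    """Find directed concept pairs (a -> b) whose support and confidence
    pass the thresholds -- the core of association-rule relation mining."""
    n = len(transactions)
    pair_count = {}
    item_count = {}
    for t in transactions:
        s = set(t)
        for item in s:
            item_count[item] = item_count.get(item, 0) + 1
        for a, b in combinations(sorted(s), 2):
            pair_count[(a, b)] = pair_count.get((a, b), 0) + 1
    rules = []
    for (a, b), c in pair_count.items():
        support = c / n
        if support < min_support:
            continue
        for x, y in ((a, b), (b, a)):       # try both rule directions
            conf = c / item_count[x]
            if conf >= min_conf:
                rules.append((x, y, support, conf))
    return rules

# concepts co-occurring per sentence (invented example)
sents = [{"person", "birthplace"}, {"person", "birthplace"}, {"person", "award"}]
rules = mine_pairs(sents)
```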
A Comparative Study of Recent Ontology Visualization Tools with a Case of Dia... (IJORCS)
Ontology is a conceptualization of a domain into machine-readable format. Ontologies are becoming increasingly popular modelling schemas for knowledge management services and applications. Focus on developing tools to graphically visualise ontologies is rising to aid their assessment and analysis. Graph visualisation helps to browse and comprehend the structure of ontologies. A number of ontology visualizations exist that have been embedded in ontology management tools. The primary goal of this paper is to analyze recently implemented ontology visualization tools and their contributions to the enrichment of users' cognitive support. This work also presents the preliminary results of an evaluation of three visualization tools to determine the suitability of each method for end-user applications where ontologies are used as browsing aids, with a case study of diabetes data.
A novel method for generating an elearning ontology (IJDKP)
The Semantic Web provides a common framework that allows data to be shared and reused across applications, enterprises, and community boundaries. Existing web applications need to express semantics that can be extracted from users' navigation and content in order to fulfil users' needs. E-learning has specific requirements that can be satisfied through the extraction of semantics from learning management systems (LMS) that use relational databases (RDB) as a backend. In this paper, we propose transformation rules for building an OWL ontology from the RDB of the open-source LMS Moodle, transforming all possible cases in RDBs into ontological constructs. The proposed rules are enriched by analyzing stored data to detect disjointness and totality constraints in hierarchies, and by calculating the participation level of tables in n-ary relations. In addition, our technique is generic and can therefore be applied to any RDB.
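One simplified transformation rule, emitted as Turtle, might look like this (the namespace and Moodle-like table names are placeholders; the paper's full rule set also covers hierarchies, constraints and n-ary relations):

```python
def rdb_to_owl(schema):
    """Map each table to an owl:Class and each column to an
    owl:DatatypeProperty -- a simplified RDB-to-OWL transformation rule."""
    lines = [
        "@prefix : <http://example.org/lms#> .",
        "@prefix owl: <http://www.w3.org/2002/07/owl#> .",
        "@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .",
    ]
    for table, columns in schema.items():
        lines.append(f":{table} a owl:Class .")
        for col in columns:
            lines.append(
                f":{table}_{col} a owl:DatatypeProperty ; rdfs:domain :{table} ."
            )
    return "\n".join(lines)

ttl = rdb_to_owl({"user": ["firstname", "email"], "course": ["fullname"]})
```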
Ontology languages are used in modelling the semantics of concepts within a particular domain and the relationships between those concepts. The Semantic Web standard provides a number of modelling languages that differ in their level of expressivity and are organized in a Semantic Web Stack in such a way that each language level builds on the expressivity of the other. There are several problems when one attempts to use independently developed ontologies. When existing ontologies are adapted for new purposes it requires that certain operations are performed on them. These operations are currently performed in a semi-automated manner. This paper seeks to model categorically the syntax and semantics of RDF ontology as a step towards the formalization of ontological operations using category theory.
Semantic Web: Technologies and Applications for the Real-World (Amit Sheth)
Amit Sheth and Susie Stephens, "Semantic Web: Technologies and Applications for the Real-World," Tutorial at the 2007 World Wide Web Conference, Banff, Canada.
Tutorial discusses technologies and deployed real-world applications through 2007.
Tutorial description at: http://www2007.org/tutorial-T11.php
PROPOSAL OF AN HYBRID METHODOLOGY FOR ONTOLOGY DEVELOPMENT BY EXTENDING THE P... (ijitcs)
W3C's Semantic Web envisions a common framework that allows data to be shared and reused across applications and enterprises. The Semantic Web and its related technologies are the main directions of future web development, in which machine-processable information supports user tasks. Ontologies play a vital role in the Semantic Web. Research on ontology engineering has pointed out that an effective ontology application development methodology with integrated tool support is mandatory for its success. There are potential benefits to ontology engineering in making the toolset of Model Driven Architecture applicable to ontology modeling. Since software engineering and ontology engineering are two complementary branches, extending the well-proven methodologies and UML-based modeling approaches of software engineering to ontology engineering can bridge the gap between the two. This paper suggests a hybrid methodology for ontology development derived from existing, mature software engineering practice. The philosophical and engineering aspects of the newly derived methodology are described clearly, and an attempt has been made to apply the proposed methodology with the Protégé editor. The full-fledged implementation of a domain ontology and its validation is a future research direction.
SEMANTIC INTEGRATION FOR AUTOMATIC ONTOLOGY MAPPING (cscpconf)
In the last decade, ontologies have played a key technology role for information sharing and agent interoperability in different application domains. In the semantic web domain, ontologies are used to face the great challenge of representing the semantics of data, in order to bring the web to its full power and thus achieve its objective. However, using ontologies as common and shared vocabularies requires a certain degree of interoperability between them. To meet this requirement, ontology mapping is an unavoidable solution. Indeed, ontology mapping builds a meta layer that allows different applications and information systems to access and share their information, after resolving the different forms of syntactic, semantic and lexical mismatch. In the contribution presented in this paper, we integrate a semantic aspect based on an external lexical resource, WordNet, to design a new algorithm for fully automatic ontology mapping. This fully automatic character is the main difference between our contribution and most existing semi-automatic ontology mapping algorithms, such as Chimaera, Prompt, Onion and Glue. To enhance the performance of our algorithm, the mapping discovery stage combines two sub-modules: the former analyzes the concepts' names and the latter analyzes their properties. Each of these two sub-modules is itself based on a combination of lexical and semantic similarity measures.
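The two-sub-module combination can be sketched as a weighted sum of a name measure and a property measure (a character-trigram overlap stands in here for the WordNet-based lexical measure, to keep the sketch self-contained; the weight and concept data are invented):

```python
def jaccard(a, b):
    """Set overlap: |a ∩ b| / |a ∪ b|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def match_score(c1, c2, w_name=0.5):
    """Combine a name-based measure with a property-based measure,
    mirroring the two-sub-module design of the mapping discovery stage."""
    def trigrams(s):
        s = s.lower()
        return {s[i:i + 3] for i in range(max(len(s) - 2, 1))}
    name_sim = jaccard(trigrams(c1["name"]), trigrams(c2["name"]))
    prop_sim = jaccard(c1["properties"], c2["properties"])
    return w_name * name_sim + (1 - w_name) * prop_sim

a = {"name": "Author", "properties": {"name", "email"}}
b = {"name": "Authors", "properties": {"name", "email", "orcid"}}
score = match_score(a, b)
```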
Towards From Manual to Automatic Semantic Annotation: Based on Ontology Eleme... (IJwest)
This document describes a proposed system for automatic semantic annotation of web documents based on ontology elements and relationships. It begins with an introduction to semantic web and annotation. The proposed system architecture matches topics in text to entities in an ontology document. It utilizes WordNet as a lexical ontology and ontology resources to extract knowledge from text and generate annotations. The main components of the system include a text analyzer, ontology parser, and knowledge extractor. The system aims to automatically generate metadata to improve information retrieval for non-technical users.
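The topic-to-entity matching step can be approximated with a label lookup (a toy stand-in: the proposed system consults WordNet rather than a hand-written synonym map, and the entity IRIs below are invented):

```python
def annotate(text, ontology_labels):
    """Link tokens in a text to ontology entities whose label (or a
    listed synonym) matches -- a minimal semantic-annotation sketch."""
    tokens = [t.strip(".,;").lower() for t in text.split()]
    annotations = []
    for entity, labels in ontology_labels.items():
        for i, tok in enumerate(tokens):
            if tok in labels:
                annotations.append((i, tok, entity))
    return annotations

onto = {"ex:Car": {"car", "automobile"}, "ex:Person": {"person", "driver"}}
anns = annotate("The driver parked the automobile.", onto)
```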
This document describes a proposed method for subontology-assisted web-based e-learning for resource management. Key points include:
1. Semantic mapping is used to integrate heterogeneous e-learning databases by mapping relational schemas to a global ontology.
2. Subontologies (SubOs) are context-specific portions of the full ontology that are evolved over time based on locality of resource reuse.
3. A SubO-based approach is used to achieve adaptive and efficient resource management and reuse by matching user requests to SubOs.
Building a multilingual ontology for education domain using MOnto method (CSITiaesprime)
Ontologies are an emerging technology for building knowledge-based information retrieval systems, used to conceptualize information in a human-understandable manner. Knowledge-based information retrieval is widely used in domains such as education, artificial intelligence and healthcare. It is important to provide multilingual information for those domains to serve multi-language users. In this paper, we propose a multilingual ontology (MOnto) methodology for developing multilingual ontology applications for the education domain. New algorithms are proposed for merging and mapping multilingual ontologies.
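The merging step for multilingual labels might be sketched as follows (concept IRIs and labels are invented; MOnto's actual algorithms also align concepts across languages rather than assuming shared identifiers):

```python
def merge_multilingual(onto_a, onto_b):
    """Merge two ontologies keyed by concept IRI, unioning their
    per-language label maps (a sketch of the merge step)."""
    merged = {}
    for onto in (onto_a, onto_b):
        for concept, labels in onto.items():
            merged.setdefault(concept, {}).update(labels)
    return merged

en = {"ex:Course": {"en": "Course"}}
fr = {"ex:Course": {"fr": "Cours"}, "ex:Teacher": {"fr": "Enseignant"}}
merged = merge_multilingual(en, fr)
```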
Presentation made in the context of the FAO AIMS Webinar titled “Knowledge Organization Systems (KOS): Management of Classification Systems in the case of Organic.Edunet” (http://aims.fao.org/community/blogs/new-webinaraims-knowledge-organization-systems-kos-management-classification-systems)
21/2/2014
An Incremental Method For Meaning Elicitation Of A Domain Ontology (Audrey Britton)
This document describes MELIS (Meaning Elicitation and Lexical Integration System), a tool developed to support the annotation of data sources with conceptual and lexical information during the ontology generation process in MOMIS (Mediator envirOnment for Multiple Information Systems). MELIS improves upon MOMIS by enabling more automated annotation of data sources and by extracting relationships between lexical elements using lexical and domain knowledge to provide a richer semantics. The document outlines the MOMIS architecture integrated with MELIS and explains how MELIS supports key steps in the ontology creation process, including source annotation, common thesaurus generation, and relationship extraction.
IJCER (www.ijceronline.com) International Journal of computational Engineerin... (ijceronline)
This document discusses using personalized ontologies to improve web information gathering by representing user profiles. It proposes a model that constructs personalized ontologies by adopting user feedback from a world knowledge base. The model also uses users' local instance repositories to discover background knowledge and populate the ontologies. The proposed ontology model is evaluated against benchmark models through experiments using a large standard dataset.
Although the use of semantic web technologies in the learning development field is a new research area, some authors have already proposed ideas of how such applications could operate effectively. Specifically, from an analysis of the literature in the field, we have identified three different types of existing applications that employ these technologies to support learning. These applications aim at: enhancing the reusability of learning objects by linking them to an ontological description of the domain, or, more generally, describing relevant dimensions of the learning process in an ontology; providing a comprehensive authoring system to retrieve and organize web material into a learning course; and constructing advanced strategies to present annotated resources to the user, in the form of browsing facilities, narrative generation and the final rendering of a course. In contrast to the approaches cited above, here we propose an approach that is modeled on narrative studies and on their transposition into the digital world. In the rest of the paper, we present the theoretical basis that inspires this approach and show some examples that are guiding our implementation and testing of these ideas within e-learning. Ontologies are recognized as the most important component in achieving semantic interoperability of e-learning resources, and the benefits of their use have already been recognized in the learning technology community. In order to better define the different aspects of ontology applications in e-learning, researchers have given several classifications of ontologies. We refer to a general one that differentiates between three dimensions ontologies can describe: content, context, and structure. Most of the present research has been dedicated to the first group of ontologies.
A well-known example of such an ontology is based on the ACM Computing Classification System (ACM CCS) and defined in Resource Description Framework Schema (RDFS). It is used in Moodle to classify learning objects with the goal of improving searching. The chapter will cover the terms of the semantic web, the design and management of e-learning systems (Moodle), some studies on e-learning and the semantic web, the tools to be used in this paper, and lastly the expected contribution. Special attention will be given to the above topics.
Similar to An Approach to Owl Concept Extraction and Integration Across Multiple Ontologies (20)
IJWEST CFP (9).pdf - CALL FOR ARTICLES...! IS INDEXING JOURNAL...! Internationa... (dannyijwest)
Paper Submission
Authors are invited to submit papers for this journal through Email: ijwestjournal@airccse.org / ijwest@aircconline.com or through Submission System.
Important Dates
Submission Deadline : June 01, 2024
Notification :July 01, 2024
Final Manuscript Due : July 08, 2024
Publication Date : Determined by the Editor-in-Chief
Here's where you can reach us : ijwestjournal@yahoo.com or ijwestjournal@airccse.org or ijwest@aircconline.com
Cybercrimes in the Darknet and Their Detections: A Comprehensive Analysis and... (dannyijwest)
Although the Dark web was originally used for maintaining privacy-sensitive communication for business, for intelligence services of defence, government and business organizations, and for fighting censorship and blocked content, the technologies behind it were later abused by criminals to conduct crimes ranging from drug dealing to contract assassinations, on a widespread scale. Since the communication remains secure and untraceable, criminals can easily use dark web services via The Onion Router (TOR), hide their illegal motives and conceal their criminal activities. This makes it very difficult to monitor and detect cybercrimes on the dark web. With the evolution of machine learning, natural language processing techniques, computational big data applications and hardware, there is a growing interest in exploiting dark web data to monitor and detect criminal activities. Due to the anonymity provided by the Dark Web and the rapid disappearance and change of the uniform resource locators (URLs) of its resources, it is not as easy to crawl the Dark web and collect data as on the usual surface web, which limits researchers and law enforcement agencies in analysing the data. Therefore, there is an urgent need to study the technology behind the Dark web, its widespread abuse, and its impact on society and existing systems, in order to identify the sources of drug dealing or terrorist activities. In this research, we analysed the predominant darker sides of the world wide web (WWW), their volumes, their contents and their ratios. We performed an analysis of the larger malicious or hidden activities that occupy the major portions of the Dark net, and of the tools and techniques used to identify cybercrimes that happen inside the dark web. We applied a systematic literature review (SLR) approach to the resources where actual dark net data have been used for research purposes in several areas.
From this SLR, we identified the approaches (tools and algorithms) that have been applied to analyse Dark net data, the key gaps, and the key contributions of the existing works in the literature. In our study, we find that the main challenges in crawling the dark web and collecting forum data are: scalability of the crawler, content selection trade-offs, social obligations for a TOR crawler, and the limitations of the techniques used in automatic sentiment analysis to understand criminals' forums and thereby monitor them. From a comprehensive analysis of existing tools, our study summarizes the most widely used tools. However, forum topics change rapidly as their sources change; criminals inject noise to obfuscate a forum's main topic and thus remain undetectable. Supervised techniques therefore fail to address the above challenges, and semi-supervised techniques would be an interesting research direction.
FFO: Forest Fire Ontology and Reasoning System for Enhanced Alert and Managem... (dannyijwest)
Forest fires or wildfires pose a serious threat to property, lives, and the environment. Early detection and mitigation of such emergencies, therefore, play an important role in reducing the severity of the impact caused by wildfire. Unfortunately, there is often an improper or delayed mechanism for forest fire detection which leads to destruction and losses. These anomalies in detection can be due to defects in sensors or a lack of proper information interoperability among the sensors deployed in forests. This paper presents a lightweight ontological framework to address these challenges. Interoperability issues are caused due to heterogeneity in technologies used and heterogeneous data created by different sensors. Therefore, through the proposed Forest Fire Detection and Management Ontology (FFO), we introduce a standardized model to share and reuse knowledge and data across different sensors. The proposed ontology is validated using semantic reasoning and query processing. The reasoning and querying processes are performed on real-time data gathered from experiments conducted in a forest and stored as RDF triples based on the design of the ontology. The outcomes of queries and inferences from reasoning demonstrate that FFO is feasible for the early detection of wildfire and facilitates efficient process management subsequent to detection.
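Querying sensor readings stored as RDF-style triples can be illustrated with a tiny pattern matcher (plain Python standing in for the SPARQL engine; the sensor names, predicates and the 55-degree threshold are invented):

```python
def query(triples, pattern):
    """Match an (s, p, o) pattern against a list of triples;
    None acts as a wildcard -- a minimal stand-in for SPARQL."""
    return [t for t in triples
            if all(p is None or p == v for p, v in zip(pattern, t))]

triples = [
    ("sensor1", "hasTemperature", 62),
    ("sensor1", "locatedIn", "zoneA"),
    ("sensor2", "hasTemperature", 25),
]
# readings hot enough to raise a fire alert
hot = [t for t in query(triples, (None, "hasTemperature", None)) if t[2] > 55]
```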
Call For Papers - 10th International Conference on Artificial Intelligence and ... (dannyijwest)
** Registration is currently open **
Call for Research Papers!!!
Free – Extended Paper will be published as free of cost.
10th International Conference on Artificial Intelligence and Applications (AI 2024)
July 20 ~ 21, 2024, Toronto, Canada
https://csty2024.org/ai/index
Submission Deadline: May 11, 2024
Contact Us
Here's where you can reach us : ai@csty2024.org or ai.conference@yahoo.com
Submission System
https://csty2024.org/submission/index.php
#artificialintelligence #softcomputing #machinelearning #technology #datascience #python #deeplearning #tech #robotics #innovation #bigdata #coding #iot #computerscience #data #dataanalytics #engineering #robot #datascientist #software #automation #analytics #ml #pythonprogramming #programmer #digitaltransformation #developer #promptengineering #generativeai #genai #chatgpt
CALL FOR ARTICLES...! IS INDEXING JOURNAL...! International Journal of Web &... (dannyijwest)
Paper Submission
Authors are invited to submit papers for this journal through Email: ijwest@aircconline.com or through Submission System. Submissions must be original and should not have been published previously or be under consideration for publication while being evaluated for this Journal.
Important Dates
• Submission Deadline: March 16, 2024
• Notification : April 13, 2024
• Final Manuscript Due : April 20, 2024
• Publication Date : Determined by the Editor-in-Chief
Contact Us
Here's where you can reach us
ijwestjournal@yahoo.com or ijwestjournal@airccse.org or ijwest@aircconline.com
Submission URL : https://airccse.com/submissioncs/home.html
Use PyCharm for remote debugging of WSL on a Windows machine (shadow0702a)
This document serves as a comprehensive step-by-step guide on how to effectively use PyCharm for remote debugging of the Windows Subsystem for Linux (WSL) on a local Windows machine. It meticulously outlines several critical steps in the process, starting with the crucial task of enabling permissions, followed by the installation and configuration of WSL.
The guide then proceeds to explain how to set up the SSH service within the WSL environment, an integral part of the process. Alongside this, it also provides detailed instructions on how to modify the inbound rules of the Windows firewall to facilitate the process, ensuring that there are no connectivity issues that could potentially hinder the debugging process.
The document further emphasizes on the importance of checking the connection between the Windows and WSL environments, providing instructions on how to ensure that the connection is optimal and ready for remote debugging.
It also offers an in-depth guide on how to configure the WSL interpreter and files within the PyCharm environment. This is essential for ensuring that the debugging process is set up correctly and that the program can be run effectively within the WSL terminal.
Additionally, the document provides guidance on how to set up breakpoints for debugging, a fundamental aspect of the debugging process which allows the developer to stop the execution of their code at certain points and inspect their program at those stages.
Finally, the document concludes by providing a link to a reference blog. This blog offers additional information and guidance on configuring the remote Python interpreter in PyCharm, providing the reader with a well-rounded understanding of the process.
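The connection check between Windows and the WSL sshd can also be scripted before attaching PyCharm's remote interpreter (host and port below are placeholders for your own setup):

```python
import socket

def can_reach(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds --
    useful for verifying the WSL SSH service is reachable."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. can_reach("localhost", 22) once sshd is running inside WSL
```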
Electric vehicle and photovoltaic advanced roles in enhancing the financial p... (IJECEIAES)
Climate change's impact on the planet forced the United Nations and governments to promote green energies and electric transportation. The deployments of photovoltaic (PV) and electric vehicle (EV) systems gained stronger momentum due to their numerous advantages over fossil fuel types. The advantages go beyond sustainability to reach financial support and stability. The work in this paper introduces the hybrid system between PV and EV to support industrial and commercial plants. This paper covers the theoretical framework of the proposed hybrid system including the required equation to complete the cost analysis when PV and EV are present. In addition, the proposed design diagram which sets the priorities and requirements of the system is presented. The proposed approach allows setup to advance their power stability, especially during power outages. The presented information supports researchers and plant owners to complete the necessary analysis while promoting the deployment of clean energy. The result of a case study that represents a dairy milk farmer supports the theoretical works and highlights its advanced benefits to existing plants. The short return on investment of the proposed approach supports the paper's novelty approach for the sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line which enhances the safety of the electrical network
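A first-pass payback calculation of the kind used in such cost analyses can be sketched as follows (the dollar figures are illustrative, not from the dairy-farm case study):

```python
def payback_years(capital_cost, annual_savings):
    """Simple payback period: years until cumulative savings
    equal the upfront investment in the PV + EV installation."""
    if annual_savings <= 0:
        raise ValueError("annual savings must be positive")
    return capital_cost / annual_savings

# e.g. a $60,000 PV array offsetting $12,000/year in energy costs
years = payback_years(60_000, 12_000)
```

A fuller analysis would discount future savings and include battery degradation and maintenance, but the simple payback gives the headline return-on-investment figure.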
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw... (IJECEIAES)
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to precisely delineate tumor boundaries from magnetic resonance imaging (MRI) scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The model is rigorously trained and evaluated, exhibiting remarkable performance metrics, including an impressive global accuracy of 99.286%, a high class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of our proposed model. These findings underscore the model's competence in precise brain tumor localization, underscoring its potential to revolutionize medical image analysis and enhance healthcare outcomes. This research paves the way for future exploration and optimization of advanced CNN models in medical imaging, emphasizing addressing false positives and resource efficiency.
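The reported IoU metric is intersection over union of the predicted and ground-truth masks; a from-scratch version of the standard definition (the tiny masks are invented):

```python
def iou(pred, truth):
    """Intersection over union for binary masks given as flat 0/1 lists."""
    inter = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    union = sum(1 for p, t in zip(pred, truth) if p == 1 or t == 1)
    return inter / union if union else 1.0  # two empty masks agree perfectly

score = iou([1, 1, 0, 0], [1, 0, 1, 0])  # 1 overlapping pixel of 3 in union
```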
Embedded machine learning-based road conditions and driving behavior monitoring (IJECEIAES)
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
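The reported precision and recall follow the usual confusion-matrix definitions (the counts below are invented for illustration, not the system's actual confusion matrix):

```python
def precision_recall(tp, fp, fn):
    """Precision = tp/(tp+fp); recall = tp/(tp+fn)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# e.g. 92 aggressive-driving events detected correctly,
# 6 false alarms, 8 missed events
p, r = precision_recall(tp=92, fp=6, fn=8)
```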
Rainfall intensity duration frequency curve statistical analysis and modeling...bijceesjournal
Using data from 41 years in Patna’ India’ the study’s goal is to analyze the trends of how often it rains on a weekly, seasonal, and annual basis (1981−2020). First, utilizing the intensity-duration-frequency (IDF) curve and the relationship by statistically analyzing rainfall’ the historical rainfall data set for Patna’ India’ during a 41 year period (1981−2020), was evaluated for its quality. Changes in the hydrologic cycle as a result of increased greenhouse gas emissions are expected to induce variations in the intensity, length, and frequency of precipitation events. One strategy to lessen vulnerability is to quantify probable changes and adapt to them. Techniques such as log-normal, normal, and Gumbel are used (EV-I). Distributions were created with durations of 1, 2, 3, 6, and 24 h and return times of 2, 5, 10, 25, and 100 years. There were also mathematical correlations discovered between rainfall and recurrence interval.
Findings: Based on findings, the Gumbel approach produced the highest intensity values, whereas the other approaches produced values that were close to each other. The data indicates that 461.9 mm of rain fell during the monsoon season’s 301st week. However, it was found that the 29th week had the greatest average rainfall, 92.6 mm. With 952.6 mm on average, the monsoon season saw the highest rainfall. Calculations revealed that the yearly rainfall averaged 1171.1 mm. Using Weibull’s method, the study was subsequently expanded to examine rainfall distribution at different recurrence intervals of 2, 5, 10, and 25 years. Rainfall and recurrence interval mathematical correlations were also developed. Further regression analysis revealed that short wave irrigation, wind direction, wind speed, pressure, relative humidity, and temperature all had a substantial influence on rainfall.
Originality and value: The results of the rainfall IDF curves can provide useful information to policymakers in making appropriate decisions in managing and minimizing floods in the study area.
Design and optimization of ion propulsion dronebjmsejournal
Electric propulsion technology is widely used in many kinds of vehicles in recent years, and aircrafts are no exception. Technically, UAVs are electrically propelled but tend to produce a significant amount of noise and vibrations. Ion propulsion technology for drones is a potential solution to this problem. Ion propulsion technology is proven to be feasible in the earth’s atmosphere. The study presented in this article shows the design of EHD thrusters and power supply for ion propulsion drones along with performance optimization of high-voltage power supply for endurance in earth’s atmosphere.
Generative AI leverages algorithms to create various forms of content
International Journal of Web & Semantic Technology (IJWesT) Vol.3, No.3, July 2012
DOI : 10.5121/ijwest.2012.3303
AN APPROACH TO OWL CONCEPT EXTRACTION AND INTEGRATION ACROSS MULTIPLE ONTOLOGIES
Nadia Imdadi(1) and Dr. S.A.M. Rizvi(2)
Department of Computer Science,
Jamia Millia Islamia (A Central University), New Delhi, India
(1) nadia.imdadi@gmail.com (2) samsam_rizvi@yahoo.com
ABSTRACT
The increase in the number of ontologies on the Semantic Web, together with the endorsement of OWL as its language of discourse, has led to a scenario in which research efforts in the field of ontology engineering can make ontology development through reuse a viable option for ontology developers. The advantages are twofold: when existing ontological artefacts from the Semantic Web are reused, semantic heterogeneity is reduced and interoperability, the essence of the Semantic Web, is improved. From the perspective of ontology development, reuse cuts down both cost and development time, since ontology engineering requires expert domain skills and is a time-consuming process. We have devised a framework to address the challenges associated with reusing ontologies from the Semantic Web. In this paper we present the methods adopted for extraction and integration of concepts across multiple ontologies. The extraction method is based on features of OWL language constructs and on context, and for integration a relative semantic similarity measure is devised. We also present guidelines for evaluating the constructed ontology. The proposed methods have been applied to concepts from a food ontology, and evaluation has been performed on concepts from the domain of academics using the Golden Ontology Evaluation Method, with satisfactory outcomes.
KEYWORDS
Ontology Engineering, Ontology Creation, OWL Concepts, Golden Ontology Evaluation Method
1. INTRODUCTION
Ontologies are conceptual representations of domains in a formal language that make data machine processable over the web. They are key elements that allow knowledge to be represented in a structured way so that a higher degree of interoperability among the various heterogeneous resources on the web may be achieved; they are the hinges upon which the Semantic Web is built. A key factor for the success of the Semantic Web is the availability of technologies for the efficient and effective reuse of ontological knowledge.
Ontology engineering, the process of building an ontology, is a time-consuming activity that also requires domain-specific skills, generally provided by experts in a particular field. Approaches to ontology development can be broadly categorized into two areas: creation from scratch, and development through reuse, which generally takes the form of merging, integration, alignment, mapping or translation. The former is painstaking, while the latter makes use of already developed formal domain representations and, though it requires diligent attention, definitely cuts down on development time.
With the standardization and maturity of semantic web languages that support description logics, ontologies on the web have mushroomed and are on the rise. The availability of these semantic resources augurs well, as they help to realise the idea of the Semantic Web and form the necessary infrastructure in which software agents can make decisions by inferring knowledge from a variety of resources. Reuse of existing ontological knowledge on the web to build ontologies for the Semantic Web may be helped by efforts in the field of ontology engineering, and vice versa.
This research continues our previous work [1-4], in which the foundations of a global framework for automatic semantic integration incorporating semantic repositories were presented. It builds on [5][6], where the key stages of ontology development through reuse were identified, namely ontology discovery, selection, integration and evaluation, and possible approaches were elaborated. Two paradigms have guided the formulation of strategies to address the issues at each stage of ontology construction: i) the principle of a modular approach, and ii) a suitable mix of human and computational skills.
2. GLOBAL FRAMEWORK FOR AUTOMATIC SEMANTIC INTEGRATION
INCORPORATING SEMANTIC REPOSITORIES
2.1. Introduction
In [1][2] a framework was put forward with the vision of an environment encompassing ontology development from locally available ontologies as well as those available online, i.e. on the semantic web. In the global context, the key components of the framework are the query handler, the semantic kernel, and the global knowledge base.
This kernel/processor is the backbone of the framework and is the first of its kind, as its aim is to facilitate the use of semantic repositories scattered across the semantic web by retrieving resources related in context. An important functionality of the kernel is the execution of the global query service routine (GQSR) for the discovery of online resources. The kernel receives user input and, after query processing, initiates a global service routine to search and retrieve relevant information from Swoogle's [7] index of semantic web documents on the web.
As described in [3][4], a bottom-up approach to ontology construction is employed, since any domain consists of concepts, which in turn are collections of a few terms, properties and relations among these terms. Based on these terms and properties an input matrix is created, which is then used for concept extraction from globally available knowledge resources. The bottom-up approach is suitable because concepts may be present across multiple online semantic repositories and therefore have to be identified, extracted and integrated using an appropriate strategy. In the following sections we discuss how each of the stages identified in the process of ontology development is addressed by the framework.
2.2. Discovery of Ontologies
In [4] the methodology for discovering ontologies on the web is deliberated. A novel modular approach, realised via input formulation, is adopted for the discovery of ontologies. Important aspects taken into consideration are the issues of word disambiguation and context identification. The framework provides a solution in the form of the GQSR module, used for querying the semantic web to discover ontologies. The modular approach is implemented through input formulation, where the input is modelled to identify same-sense words using a word sense disambiguation technique, and to identify context through careful selection of words which, when appearing together, represent a concept. Three premises serve as a guide during input modelling:
Premise 1: A concept may be identified when a couple of words/terms appear together.
Premise 2: A word may have more than one sense; an attribute or property associated with it can be used to identify its context.
Premise 3: In the case of structured information such as namespaces/ontologies, context can be identified via super class (parent-of) and sub class relations.
The GQSR module is implemented in Java and uses Swoogle's Web Service API [8] to access ontologies on the Semantic Web. The module makes use of a hash bucket algorithm customized for the selection of potential ontologies, which are input to the next stage. The discovered ontologies are retrieved and ranked using an aggregation function, which is essentially a summation of weighted concepts across all the inputs given by the user. For implementation details and results please refer to [4].
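The aggregation step can be sketched as follows. This is an illustrative Python rendering only; the paper's GQSR module is implemented in Java against Swoogle's API, and the candidate sets and concept weights shown here are hypothetical.

```python
# Rank candidate ontologies by summing the weights of the user's input
# concepts they contain (the aggregation function described above), then
# sort by that aggregate score in descending order.

def rank_ontologies(candidates, concept_weights):
    """candidates: {ontology_uri: set of concept labels it defines};
    concept_weights: {concept label: weight} derived from the user input."""
    scores = {}
    for uri, concepts in candidates.items():
        # Summation of weighted concept matches across all user inputs.
        scores[uri] = sum(concept_weights.get(c, 0.0) for c in concepts)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical candidates and weights, for illustration only.
candidates = {
    "onto:pizza": {"Pizza", "PizzaBase", "PizzaTopping"},
    "onto:wine": {"Wine", "RedWine"},
}
weights = {"Pizza": 2.0, "PizzaBase": 1.0, "Wine": 1.0}
ranking = rank_ontologies(candidates, weights)  # highest score first
```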
3. AN APPROACH FOR OWL CONCEPT EXTRACTION AND INTEGRATION
ACROSS MULTIPLE ONTOLOGIES
3.1. Extraction Methodology
3.1.1. Features of OWL
This framework works with OWL ontologies, as OWL is the official web ontology language endorsed by the W3C and satisfies all the requirements of a web ontology language, whose constructs should support the following [9]:
• the important concepts (classes) of a domain
• important relationships between these concepts, which can be hierarchical (subclass relationships), other predefined relationships contained in the ontology language, or user defined (properties)
• further constraints on what can be expressed (e.g. domain and range restrictions, cardinality constraints, etc.)
Based on these parameters, Figure 1 identifies the key constructs of the OWL language [10] that play a significant role in defining a class/concept.
3.1.2. Class Related Significant OWL Constructs
Each block in the figure represents some aspect of a particular class. The first block has the predicate Class as its prime predicate, along with three predicates that define the environment in which the class exists. A class is extended using the DatatypeProperty and ObjectProperty attributes. These are defined using the predicates indicated in their associated blocks, which help to place restrictions on the relations between instances of two classes as well as on the number of elements that may participate in a relationship. The DatatypeProperty of a class basically describes the features/attributes of that class, while the ObjectProperty expresses the association between members of two classes. In order to learn about any class using our framework, we identify these constructs to be retrieved or aggregated across ontologies.
Figure 1 Basic Class Building Predicates and their Associations
Each Class/Entity can be defined as a composition {E, Op, Dp, S}, where:
E: set of superclasses and set of subclasses;
Op: set of object properties and subproperties;
Dp: set of attributes (datatype properties);
S: set of classes associated with the given class as a domain or range through an object property or datatype property.
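The {E, Op, Dp, S} representation above can be captured in a small data structure. The sketch below is ours, not the paper's Java/OWL API implementation, and the sample values are hypothetical.

```python
# Minimal container for a class definition as the composition {E, Op, Dp, S}.
from dataclasses import dataclass, field

@dataclass
class ConceptDefinition:
    name: str
    e: set = field(default_factory=set)   # superclasses and subclasses
    op: set = field(default_factory=set)  # object properties / subproperties
    dp: set = field(default_factory=set)  # datatype properties (attributes)
    s: set = field(default_factory=set)   # classes linked as domain/range

    def features(self):
        # Union of all feature sets; useful when comparing two classes.
        return self.e | self.op | self.dp | self.s

# Hypothetical example.
pizza = ConceptDefinition("Pizza", e={"Food"}, op={"hasBase"}, s={"PizzaBase"})
```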
3.1.3. Nature of Representation of Domain Knowledge
Examining the ontologies retrieved using our discovery and selection method [4] threw light on the use of natural language in building these ontologies. It is common practice for a concept or class to be represented by joining two terms; for example, the term base used in the context of pizza is represented as PizzaBase or Pizza-Base. In fact, this type of naming convention is promoted by the ontology development environment Protégé, which has a feature to automatically attach one term to another when defining a hierarchy of classes. The OWL documentation [11] recommends that all class names start with a capital letter and contain no spaces.
3.1.4. Method of Extraction
With this background on the nature of OWL ontologies, based on formal language constructs and natural language conventions, we have devised the extraction technique. Three things are considered during the extraction process:
- only the class information identified in section 3.1.2 is retrieved in relation to a class;
- classes represented using one or more conjoined terms from the user-defined key terms are extracted;
- classes having two terms, of which at least one belongs to the key term list, are extracted.
For example, if the user-defined term list comprises {red, white, wine}, then the sample classes RedWine, WhiteWine, Wine, GrapeWine and RedGrapeWine will be returned.
The reason for these restrictions is that ontologies vary in size: one may have as many as a hundred concepts defined, whereas another may have more than a thousand. Since our approach is ontology development using a modular approach, these restrictions help to draw a line and identify potential class candidates. Another benefit of this approach is that computational time and memory requirements are reduced.
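The class-name filter can be sketched as below. This Python sketch implements one reading of the criteria, namely keeping a class when at least one of its conjoined terms belongs to the key-term list; the helper names are ours, and the splitting of CamelCase or hyphenated names follows the naming convention described in section 3.1.3.

```python
# Split a CamelCase or hyphenated class name into lowercase terms and keep
# the class if any term comes from the user-defined key-term list.
import re

def split_terms(class_name):
    # "RedGrapeWine" -> ["red", "grape", "wine"]; also handles "Pizza-Base".
    parts = re.findall(r"[A-Z][a-z0-9]*|[a-z0-9]+", class_name)
    return [p.lower() for p in parts]

def matches(class_name, key_terms):
    # One reading of the extraction criteria: keep the class when at least
    # one of its conjoined terms belongs to the key-term list.
    return any(t in key_terms for t in split_terms(class_name))

keys = {"red", "white", "wine"}
names = ["RedWine", "WhiteWine", "Wine", "GrapeWine", "RedGrapeWine", "PizzaBase"]
selected = [n for n in names if matches(n, keys)]
```

With the term list {red, white, wine} this reproduces the example in the text: all wine-related classes pass the filter while PizzaBase, containing no key term, is excluded.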
The extractors are implemented using the OWL API [12]. The extractor module retrieves the class definition, which includes the SuperClasses, SubClasses, DisjointWith classes and properties associated with the classes satisfying the above criteria.
3.2. Integration Methodology
Integration is an important aspect, as it helps in assimilating the same classes across ontologies. For example, suppose X is defined in ontology A with n features and also in ontology B with m features. To avoid representing X twice, i.e. so that two definitions of the same thing can be combined through a UNION of their features, some similarity measure is needed. Another aspect to consider is that a class may carry the same name in two ontologies yet have entirely different sets of features; a UNION would then be misleading, as the classes may represent entirely different concepts. This calls for devising a similarity measure that weighs the similarity of features against their dissimilarity.
3.2.1 Similarity Versus Dissimilarity
Since one class may have more features than another, there is a need to normalize this difference and define similarity or dissimilarity based on it. Classes may also have different numbers of occurrences of a predicate, and this difference needs to be accounted for. If classes with similar predicates but different counts differ in most cases, it may be concluded that the classes are dissimilar or that they represent different aspects of one thing, whereas if similar predicates with different counts agree in most instances, the likelihood that they represent the same thing increases.
Let us consider that PersonnelInformation is defined in two ontologies as follows:
Example 1
  Ontology 1: PersonnelInformation — Name, Age, SSN, Address, PhoneNo.
  Ontology 2: PersonnelInformation — Name, Age, SSN
Example 2
  Ontology A: Book — AuthorName, Title, Publisher, ISSN
  Ontology B: Book — Ticket, Status, Date
Table 1 Similarity vs Dissimilarity Examples
In Example 1 of Table 1 it can be seen that Ontology 1 describes PersonnelInformation more elaborately than Ontology 2, and all the features of PersonnelInformation defined in Ontology 2 also appear in the definition in Ontology 1. Thus integration of the two results in one class, which should be represented by the definition from Ontology 1.
In general, we believe that when any class is defined in an ontology its most basic elements are associated with it; as seen in the example, name and age are basic features that will have to be defined for a class such as employee. Through the above example we also wish to highlight that one ontology may have a fuller description or definition of an entity class while another defines only the basic essentials. Now we take another example where differences between two classes exist even though the class names are the same.
Considering Example 2 in Table 1, the class Book in Ontology A represents the education domain, while it can be deduced that its namesake in Ontology B represents a concept from the travel domain. It can therefore be said that these two classes represent different concepts and should be treated as separate entities.
The above examples emphasize the need to consider similarity versus dissimilarity based on class definitions when integrating two classes. We therefore conclude that, when defining a similarity measure for two classes, one has to take account of the relative closeness of the two classes before judging whether they represent the same concept or different ones.
Existing approaches to computing semantic similarity between the concepts of two classes have been put forward [13][14][15] for operations such as mapping, aligning and integrating, but these do not consider the relative similarity of one class to another, which we consider important owing to the fact that ontologies are developed to user-specific requirements: ontologies of the same domain may be defined more elaborately by one group than by another.
3.2.2 Relative Semantic Similarity Measure (RSSM)
Since the parameters we consider take the form of sets of features, the problem is to define a measure of how much of one set is contained in the other. To devise a similarity measure that takes similarity versus dissimilarity into account, we have considered the set-based Jaccard Index, also known as the Jaccard Similarity Coefficient [16], a statistic used for comparing the similarity and diversity of sample sets. It is defined as the size of the intersection divided by the size of the union of the sample sets and is given by the following formula:

    J(A, B) = |A ∩ B| / |A ∪ B|

Dissimilarity between sample sets is known as the Jaccard distance, which is complementary to the Jaccard coefficient and is obtained by subtracting the Jaccard coefficient from 1 or, equivalently, by dividing the difference of the sizes of the union and the intersection of the two sets by the size of the union:

    d_J(A, B) = 1 − J(A, B) = (|A ∪ B| − |A ∩ B|) / |A ∪ B|
The Jaccard index and Jaccard distance measure similarity and dissimilarity, but they do not measure the degree to which one set of features is contained in the other, and vice versa. In other words, these measures compute similarity between two sets based on the presence and absence of features, yielding a normalized value that does not reflect relative similarity. The set of features of one class may be contained to a high degree in the set of features of another class while the converse does not hold, and in such cases it can still be said that the classes under consideration represent the same thing.
Based on the above considerations, we propose a method that computes the relative similarity of the feature set of one class against the feature set of a second class, and vice versa. In this way it can be determined which class is more similar to the other, and to what degree. If C1 and C2 are two classes, viewed as their feature sets, we compute relative similarity using the following formulas:

    R(C1, C2) = |C1 ∩ C2| / |C1|    ... (1)

    R(C2, C1) = |C1 ∩ C2| / |C2|    ... (2)
Equations (1) and (2) indicate the degree of closeness of one class to the other. For instance, for a class X found in two ontologies, if (1) gives a value of 1 and (2) gives a value of 0.5, we can deduce that all the features present in C1 are also present in C2, whereas only 50 percent of the features of C2 are reflected in C1. It may therefore be concluded that C1 and C2 represent the same concept.
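Equations (1) and (2) can be sketched directly as set operations. The paper's implementation is in Java; this Python version is ours, for illustration, and the feature sets in the example are hypothetical.

```python
# Relative semantic similarity measure (RSSM) of equations (1) and (2),
# treating each class as a set of features.

def rssm(c1, c2):
    """Return (R(C1,C2), R(C2,C1)) = (|C1 ∩ C2|/|C1|, |C1 ∩ C2|/|C2|)."""
    if not c1 or not c2:
        return 0.0, 0.0
    common = len(c1 & c2)
    return common / len(c1), common / len(c2)

# Illustrative case matching the text: every feature of C1 appears in C2,
# but C2 has twice as many features, so R(C1,C2) = 1 and R(C2,C1) = 0.5.
c1 = {"Name", "Age", "SSN"}
c2 = {"Name", "Age", "SSN", "Address", "PhoneNo", "Email"}
alpha, beta = rssm(c1, c2)
```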
For example, let us consider the two classes listed in Table 2, extracted from different ontologies:

Table 2 Classes Extracted from Two Different Ontologies

Class Name: Fisheggs
  1st Ontology (UNPC):
    SuperClass: Other-animal-products
  2nd Ontology (Taprdf):
    SuperClass: Egg
    SubClass: Cavior

Class Name: FishTopping
  1st Ontology:
    SuperClass: PizzaTopping
    SubClasses: AnchoviesTopping, MixedSeafoodTopping, PrawnsTopping
    DisjointWith: CheeseTopping, FruitTopping, HerbSpiceTopping, MeatTopping, NutTopping, SauceTopping, VegetableTopping
    Property: hasSpiciness, Mild
  2nd Ontology:
    SuperClasses: PizzaTopping
    SubClasses: AnchoviesTopping, MixedSeafoodTopping, PrawnsTopping
    DisjointWith: DairyTopping, FruitTopping, HerbSpiceTopping, MeatTopping, NutTopping, SauceTopping, VegetableTopping
For the two classes we can identify the following:
C1 = {E1, Op1, Dp1, S1} and C2 = {E2, Op2, Dp2, S2}
Since the elements of the sets under consideration are strings, some string matching mechanism is required. Firstly, we identify the cases in which this measure is applicable.
Case 1: The measure is computed when two namesake classes are found in different ontologies. Plural variants of class names also have to be considered; for example, for FishEgg and FishEggs the measure should be applied. For such cases we use the Levenshtein algorithm [17][18], also called Edit Distance, which calculates the least number of edit operations necessary to modify one string into another.
Case 2: RSSM is computed for classes that are synonyms of each other.
Therefore, we compute relative similarity only when two class labels are within a Levenshtein Edit Distance (LED) of not more than 1 and have similarity (SIM) >= 0.5, or when the classes are synonyms.
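The label-gating rule can be sketched as follows. The edit-distance routine is the standard dynamic-programming Levenshtein algorithm; the paper does not spell out its SIM formula, so the normalisation used here (1 − LED / max label length) is our assumption.

```python
# Gate the RSSM computation on the class labels: apply it only when the
# Levenshtein Edit Distance (LED) is at most 1 and SIM >= 0.5.

def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def labels_comparable(a, b):
    led = levenshtein(a, b)
    sim = 1 - led / max(len(a), len(b))  # assumed SIM normalisation
    return led <= 1 and sim >= 0.5

# Plural variant from the text: LED("FishEgg", "FishEggs") = 1.
ok = labels_comparable("FishEgg", "FishEggs")
```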
For the class FishEggs found in the two ontologies, the computation is as follows:
C1 = {Name = FishEggs, E1 = SuperClass: other-animal-products}
Here |E1| = 3, as other, animal and products are treated as three words defining the SuperClass of FishEggs, and since the class has no SubClass the SubClass parameter is set to 0.
C2 = {Name = FishEggs, E2 = SuperClass: Eggs and SubClass: Cavior}
The parameters under consideration are the matches between the superclasses of the two classes; since the other features are present in one class and absent in the other, it is assumed that one definition is more elaborate than the other.
Relative similarity of C1 to C2:
R(C1, C2) = |E1 ∩ E2| / |E1| = 0 / 3 = 0
R(C2, C1) = |E1 ∩ E2| / |E2| = 0 / 2 = 0
Therefore, for the class FishEggs there is only a label match but no feature match; the two should be treated as separate classes, and it is left to the ontology editor to decide which one to accept.
Now, computing the relative semantic similarity for FishTopping:
C1 = {Name = FishTopping,
  E1 = SuperClass: PizzaTopping; SubClasses: AnchoviesTopping, MixedSeafoodTopping, PrawnsTopping; DisjointWith: CheeseTopping, FruitTopping, HerbSpiceTopping, MeatTopping, NutTopping, SauceTopping, VegetableTopping;
  Op1 = hasSpiciness;
  S1 = Mild}
C2 = {Name = FishTopping,
  E2 = SuperClass: PizzaTopping; SubClasses: AnchoviesTopping, MixedSeafoodTopping, PrawnsTopping; DisjointWith: DairyTopping, FruitTopping, HerbSpiceTopping, MeatTopping, NutTopping, SauceTopping, VegetableTopping}
Taking a count of the elements in the respective definitions we get:
Name = 1 in each case, and
|E1| = 1 + 3 + 7 = 11 and |E2| = 1 + 3 + 7 = 11 (|SuperClasses| + |SubClasses| + |DisjointWith|)
Features present in one class definition but not in the other are not used in computing the relative similarity of the two classes.
Now we compute C1's relative similarity to C2; one superclass, three subclasses and six of the seven DisjointWith classes match:
R(C1, C2) = (1 + 3 + 6) / 11 ≈ 0.91
R(C2, C1) = (1 + 3 + 6) / 11 ≈ 0.91
Both relative similarities are equal and relatively high, and the classes can therefore be merged into a single class.
The challenge here is to decide on the threshold value at which two classes are rendered similar or dissimilar. Based on the above examples and the results obtained from the chosen domain of food ontology, given in the following table, we consider the following thresholds suitable.
The relative similarity of Class A and Class B is computed when either they are synonyms or when
LED(A, B) <= 1 and SIM >= 0.5
The classes are considered similar and are integrated if, for
R(C1, C2) = α and R(C2, C1) = β, ... (3)
(α > 0.25 and β > 0.5) or (α > 0.5 and β > 0.25); otherwise the classes are considered dissimilar and are represented as separate entities, and it is left to the user to decide which definition to retain or discard.
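The decision rule in (3) is a two-line predicate. This sketch is illustrative; the function name is ours.

```python
# Integration decision from (3): merge two classes when their relative
# similarities alpha = R(C1,C2) and beta = R(C2,C1) cross the thresholds.

def should_integrate(alpha, beta):
    return (alpha > 0.25 and beta > 0.5) or (alpha > 0.5 and beta > 0.25)

# FishTopping-style case: both relative similarities high, so merge.
merge_fish_topping = should_integrate(0.90, 0.90)
# FishEggs case from the text: label match only, no feature overlap.
merge_fish_eggs = should_integrate(0.0, 0.0)
```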
3.2.3. Adjacency Matrix – An Intuitive Method to Display Relationships amongst Learned Classes
We believe the ontology editor should be given an intuitive environment for visualizing the relationships SuperClassOf, SubClassOf, EquivalentClass, DisjointWith, Domain and Range, by displaying them in the form of an adjacency matrix. This type of visualization has not been used in ontology editor environments, which mostly rely on hierarchical or node-link representations.
The advantages of this approach are that i) it gives an unscrambled interface, avoiding the multiple edge crossings amongst classes that can be seen in the bubble representation of Figure 3, and ii) the user can quickly locate a class and find the type of relationship it has with other classes by a simple scroll.
The genesis of this idea came from recent work on visualization tools [19][20] for the semantic web that use an adjacency matrix to visualize huge RDF graphs, with a focus on large instance sets and the relations that connect them.
An adjacency matrix is a data structure for depicting edges between the nodes of a graph. In this framework a weighted adjacency matrix is used, where weights are represented by different colours and a colour indicates the type of relationship that exists between two nodes. Pseudocode for the integration methodology is as follows:
Pseudocode – Automatic Semantic Integration Incorporating Semantic Repositories
Input: OWL namespaces / RDF graphs
Output: adjacency matrix and entity list / class dictionary

START
Input: set of all candidate namespaces, and Keyword Dictionary (KD)
For each namespace N:
    For each word in KD, search for it in N:
        - add it to the Intermediate Adjacency Matrix (IAM)
        - on a match, retrieve the entity data and store it in the Class Dictionary (CD)
    For all the classes/entities, say the set C, found in N, retrieve the
    relationships they have with each other and update the IAM accordingly.
    Output: IAM, CD
End loop over all namespaces N

/* Once all the candidate namespaces have been processed, the next step is
   to integrate them to learn:
   - the complete definition of each entity
   - the relationships between entities */

For each IAM (with its CD):
    If this is the first IAM:
        add its entities to the Final Adjacency Matrix (FAM) from the CD,
        retaining the relationships for these entities from the IAM.
    Else:
        For each entity in the CD:
            compare it to the Final Class Dictionary (FCD) and compute RSSM
            if the labels are synonyms, or LED <= 1 and SIM >= 0.5:
            - if the RSSM satisfies the thresholds, perform a UNION of the
              attributes and update the FCD; no change in the FAM except for
              relationship updates, i.e. if E1 in the CD is similar to E2
              then the relationships E1 has with the other CD objects are
              added to the FAM for E2;
            - if not similar, add the entity and its entity data to the FCD
              and add the entity to the FAM.
        Once all entities in the CD for the present namespace have been
        exhausted, update the FAM to include all relationships between the
        entities in the CD represented by this namespace's IAM.
End loop over all IAMs
Output: FAM, FCD
END
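A compact, runnable rendering of the integration loop above is sketched below, with classes reduced to feature sets and relationship handling (the adjacency matrices) omitted for brevity. The RSSM function and thresholds follow the paper; the data layout and the "#2" disambiguation of dissimilar namesakes are our own simplifications.

```python
# Condensed integration loop: merge namesake classes across namespaces
# when their relative semantic similarities pass the thresholds of (3).

def rssm(c1, c2):
    common = len(c1 & c2)
    return (common / len(c1) if c1 else 0.0,
            common / len(c2) if c2 else 0.0)

def integrate(class_dicts):
    """class_dicts: one {class_name: feature_set} per source namespace."""
    fcd = {}  # Final Class Dictionary
    for cd in class_dicts:
        for name, feats in cd.items():
            if name in fcd:  # namesake found: apply the RSSM thresholds
                a, b = rssm(fcd[name], feats)
                if (a > 0.25 and b > 0.5) or (a > 0.5 and b > 0.25):
                    fcd[name] = fcd[name] | feats  # UNION of attributes
                    continue
                name = name + "#2"  # keep dissimilar namesakes separate
            fcd[name] = set(feats)
    return fcd

# Hypothetical input: two namespaces each defining a Pizza class.
fcd = integrate([
    {"Pizza": {"hasBase", "hasTopping"}},
    {"Pizza": {"hasBase", "hasTopping", "hasSpiciness"}},
])
```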
3.2.4. Class Dictionary – Representation of Individual Learned Concepts
The class dictionary is another output of this integration approach. It consists of the class definitions extracted from the ontologies. Based on LED or synonymy, as the case may be, the relative semantic similarity of two classes is computed, and if it is favourable the UNION of the two is stored in the class dictionary.
3.2.5. Extraction and Integration: Food Ontology- An Example
This section contains the results obtained by applying the extraction methodology to the namespaces identified in [4].
The Final Class Dictionary (FCD) consists of 144 classes learned from the five identified namespaces. RSSM was computed for 23 pairs of classes, part of which is presented in Table 3; 19 of these pairs were found to satisfy the set thresholds and were thus integrated. The properties learnt are presented in Table 4 (in part), which lists the relations learned from across multiple ontologies; this is another important aspect of building ontologies from multiple sources. An ontology engineer can accept, filter out, or modify these according to requirements.
Table 3 Relative Semantic Similarity Measures of Learned Classes
Class Name α β
Fisheggs 0 0
FishTopping 0.95 0.95
MeatTopping 0.95 0.95
MixedSeafoodTopping 1 1
NamedPizza 0.33 1
Pizza 0.39 0.52
PizzaBase 1 1
PizzaTopping 0.88 0.88
TomatoTopping 1 0.83
Figure 2 is a snapshot of the adjacency matrix presenting part of the key classes learnt about the concept Pizza. A bubble representation of the same concepts is illustrated in Figure 3; it can be seen that the adjacency matrix of Figure 2 is free from the cross links present in the bubble representation, and therefore gives the user an uncluttered view. While the bubble representation is handier for a well-formed ontology, the adjacency matrix representation is more useful during the design phase, when one is assessing the types of relationship that may exist between the various classes.
Table 4 Learned Properties, Domain and Range
Extracted Properties Domain Range
hasBase Pizza PizzaBase
hasTopping NamedPizza PizzaTopping
hasSpiciness PizzaTopping Mild
hasBody Wine Full, Medium
hasSugar Wine OffDry, Sweet
hasMaker Wine Winery
hasFlavor Wine Moderate, Strong
[Figure 2 shows a colour-coded adjacency matrix whose rows and columns are the learned classes: Burgundy, DessertWine, DryRedWine, DryWhiteWine, DryWine, FishTopping, Fruits, FruitTopping, Grapefruit, Grapes, MeatTopping, MeatyPizza, MixedSeafoodTopping, PastaSauce, Pizza, PizzaBase, PizzaTopping, RedWine, Sauce, SauceTopping, SweetWine, TomatoTopping, WhiteWine, Wine. Cell colours encode the relations SuperClassOf, SubClassOf, DisjointWith, Domain and IsDomainOf.]
Figure 2 Part Adjacency Matrix
Figure 3 Part Bubble Diagram Showing Learned Pizza Concept
4. ONTOLOGY EVALUATION
Ontology evaluation is the process of checking to what extent the developed ontology conforms to the requirements. The task of evaluation becomes easier if a reference ontology is available; in that case the Golden Standard Methodology of evaluation [21] can be applied. This may not always be the case, however, and we therefore suggest assessment by humans as the appropriate approach, since it covers all the levels of evaluation proposed by [21]: lexical, vocabulary, concept and data; hierarchy/taxonomy; other semantic relations; context application; syntactic; and architecture and design.
5. EVALUATION OF FRAMEWORK
We validate our approach because the results obtained for building an ontology with this framework can still be evaluated using the Golden Standard Method [21]. Using the golden standard gives us the flexibility:
- To evaluate the framework in a general setup, as the reference ontology can be from any domain.
- To see to what degree our approach is able to extract correct concepts, classes and semantic relationships relative to an existing ontology.
5.1. GOLDEN STANDARD METHOD OF EVALUATION
The Golden Standard Method evaluates at four levels, namely: Level 1, lexical, vocabulary, concept and data; Level 2, hierarchy/taxonomy; Level 3, other semantic relations; Level 4, syntactic. To perform the Golden Standard Method we select a reference ontology that was developed by domain experts and exists on the web. The selected reference ontology represents concepts from the university domain.
Level 1 Lexical, vocabulary, concept, data
Table 5 Concept Matrix
Concept- Publication
Publication, Article, Book, Conference, Journal
Publication, Technical Report, Workshop Paper
Publication, Journal, Special, Issue, Online
Concept- Person
Person, Employee, Academic, Staff, Administrative
Administrative, Staff, Secretary, Technical, Organization
Concept- Organization
University, Student, PhD, research, group
Organization, Department, Institute, Research Group, University
Concept- Conference
Activity, Event, Conference, Meeting, Workshop
Level 2 The hierarchy/taxonomy under consideration is presented in Table 6
Table 6 Hierarchy/Taxonomy
Publication (SuperClass)
---Article (Class)
---ArticleInBook (SubClass)
---ConferencePaper (SubClass)
---JournalArticle (SubClass)
---TechnicalReport (SubClass)
---WorkshopPaper (SubClass)
---Book (Class)
---Journal (Class)
---SpecialIssuePublication (SubClass)
---OnlinePublication (Class)
Organization (SuperClass)
---Department (SubClass)
---Institute (SubClass)
---ResearchGroup (SubClass)
---University (SubClass)
Person (SuperClass)
---Employee (Class)
---AcademicStaff (SubClass)
---Lecturer (SubClass)
---Researcher (SubClass)
---PhDStudent (SubClass)
---AdministrativeStaff (SubClass)
---Secretary (SubClass)
---TechnicalStaff (SubClass)
---Student (SubClass)
---PhDStudent (SubClass)
Event (SuperClass)
---Activity (SubClass)
---Conference (SubClass)
---Meeting (SubClass)
---Workshop (SubClass)
Level 3 Other semantic relations define constraints on a class
Consider class Article (Table 7): in the reference ontology the keyword associated with it can only be a string, and the author of an article can only be from class Person. Sample semantic relations for class Article, as defined in the reference ontology:
Table 7 Reference Class Name Article Description
Reference Ontology: Ka.Owl
Lexical/Vocabulary: Article
Other Semantic Relations: keyword only string; author only Person; title only string; online version only OnlinePublication class; year only integer; abstract only string
Syntactic: OWL
Level 4 Syntactic – OWL description
5.2. Evaluation
We applied our method to the concepts identified at Level 1 from the reference ontology; Table 5 lists the concepts input to the system.
Table 8 gives the aggregated ranked namespaces:
Table 8 Aggregated Ranked Ontologies
Namespace Alias Name Weight
http://annotation.semanticweb.org/iswc/iswc.owl Annotation 2.6
http://morpheus.cs.umbc.edu/aks1/ontosem.owl Morpheus 4.2
http://purl.oclc.org/NET/nknouf/ns/bibtex Bibtex 1
http://swrc.ontoware.org/ontology SWRC 6.2
http://www.aktors.org/ontology/portal Aktors 6.4
The 4th namespace, alias SWRC, has a high score, implying good concept coverage. This is because it is another version of the reference ontology; it is therefore not considered in the later stages of the evaluation of the framework, to remove any bias. Another aspect worth highlighting is that the 5th namespace in Table 8 is a huge ontology with many classes defined, which perhaps explains why it gives the greatest concept coverage, with the maximum aggregate of 6.4.
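The kind of aggregate shown in Table 8 can be illustrated as a per-namespace sum of match weights. This is only a sketch: the paper's hash-based bucket algorithm [4] is not reproduced here, and the per-concept weights below are invented so that the totals mirror Table 8.

```python
# Hedged sketch: aggregating a concept-coverage score per namespace.
# The weights are hypothetical; only the aggregation step is illustrated.

def aggregate_scores(matches):
    """matches: {namespace: [per-concept match weights]} -> ranked (ns, total)."""
    totals = {ns: round(sum(ws), 1) for ns, ws in matches.items()}
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

matches = {
    "Annotation": [1.0, 0.8, 0.8],
    "SWRC": [1.0, 1.0, 1.0, 1.0, 1.0, 1.2],
    "Aktors": [1.0, 1.0, 1.0, 1.0, 1.2, 1.2],
}
ranking = aggregate_scores(matches)
```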
The next stage in the framework is the extraction of concepts. A total of 297 classes were identified as potential classes by the proposed extraction method. Since a reference ontology is available, the exact classes to look for are known, and the list of classes can therefore be reduced to exact matches with the reference ontology. The reduced list of 24 potential classes is shown in Table 9.
Table 9 List of Potential Classes
Academic, Activity, Article, Book, Booklet, Conference, Department, Employee, Event,
InBook, Institute, Journal, Meeting, Organization, Person, PhdThesis, Publication, Research,
Researcher, Student, Secretary, TechReport, Workshop, University
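The reduction from 297 potential classes to the exact matches of Table 9 amounts to a case-insensitive lexical filter against the reference vocabulary; a sketch with hypothetical class names:

```python
# Hedged sketch: keep only candidate classes with an exact (case-insensitive)
# lexical equivalent in the reference ontology's vocabulary.

def exact_matches(candidates, reference_vocab):
    ref = {name.lower() for name in reference_vocab}
    return sorted(c for c in candidates if c.lower() in ref)

# Illustrative inputs, not the paper's actual 297-class pool
candidates = ["Article", "Automobile", "Book", "Conference", "Recipe"]
reference_vocab = ["Article", "Book", "Conference", "Journal", "Person"]
reduced = exact_matches(candidates, reference_vocab)
```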
5.2.1 Result Analysis
The reduced list of classes has exact lexical equivalents among the classes of the reference ontology. Out of the 28 keyword lists from which we had formed the concepts, 24 have been identified by this framework from across multiple ontologies. Level 1 of the Golden Standard Method is thus accomplished, and it may be concluded that:
- The framework successfully retrieved relevant namespaces: it not only identified the namespace that is a version of the reference ontology, but also gave it the second-highest aggregate, behind only an ontology containing many more concepts.
- The framework identified 24 classes out of the 28 key terms that form the concepts, drawing from across multiple ontologies.
The Golden Standard Method of ontology evaluation has been explored for evaluating learned ontologies against a reference ontology, and [22] has emerged as an evaluation method that considers not only the lexical layer but also the concept hierarchies of the learned and the reference ontology during the evaluation process.
Level 2 of the Golden Standard Method consists of checking for taxonomic similarities between the learned concepts and those in the reference ontology. To perform Level 2 we used the OnteEval tool [23], an implementation of [22], on the individual namespaces retrieved in the first stage of the framework's execution. The output of running the algorithm on each namespace together with the reference ontology is the set of concepts found similar in both. Table 10 gives the result.
Table 10 Result of OnteEval Tool
Class Bibtex Aktors Morpheus Annotation
Article x x
Book x x x x
Conference x x x
Department x
Employee x x
Event x x x
Institute x x
Journal x
Meeting x
Organization x x x
Person x x x
Publication x x
Researcher x x
Secretary x x x
Student x x x
University x x x
Workshop x x x
The concept hierarchies of as many as 17 classes across the ontologies are found to be similar to those in the reference ontology, leading to the conclusion that their integration is plausible.
The results of Level 2 underline two aspects that favour the conclusion that the framework will lead to correct concept formation:
- Our method is able to identify concepts present across multiple ontologies. For instance, class Department is found in namespace Annotation only, whereas class Book is found in all the ontologies (Table 10).
- Integration will plausibly lead to correct concept formulation, as each class in the above table is also deemed similar to one in the reference ontology by the OnteEval tool.
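The Level 2 taxonomic check can be sketched as an ancestor-overlap comparison between the learned and reference hierarchies. OnteEval's actual measure [22] is based on semantic cotopy and is richer than this; the taxonomies below are illustrative and assumed acyclic.

```python
# Hedged sketch: ancestor-overlap between learned and reference taxonomies.
# Semantic cotopy ([22]) also considers descendants; this simplified version
# compares ancestor sets only.

def ancestors(taxonomy, concept):
    """taxonomy: {child: parent} mapping (assumed acyclic); all ancestors."""
    seen = set()
    while concept in taxonomy:
        concept = taxonomy[concept]
        seen.add(concept)
    return seen

def taxonomic_overlap(tax_a, tax_b, concept):
    """Jaccard overlap of the concept's ancestor sets in the two taxonomies."""
    a, b = ancestors(tax_a, concept), ancestors(tax_b, concept)
    union = a | b
    return len(a & b) / len(union) if union else 1.0

# Illustrative hierarchies: reference (cf. Table 6) vs a learned variant
reference = {"Conference": "Event", "Workshop": "Event"}
learned = {"Conference": "Event", "Event": "TemporalThing"}
score = taxonomic_overlap(reference, learned, "Conference")
```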
Level 3 of the evaluation process compares the learned other semantic relations with those in the reference ontology. This level was performed manually on the set of classes in Table 10; we further verified whether the union of attributes that leads to class formulation, based on the RSSM proposed in this framework, results in correct learning of the other semantic relations defined on a class in the reference ontology. As Table 11 (in part) shows, not all classes found similar by the OnteEval tool across ontologies satisfy the RSSM criterion defined by the framework. However, a comparison of the ones that do satisfy the criterion, shown in Table 12 (in part), depicts correct learning of lexical, hierarchical and other semantic relations (shown in italics).
Table 11 Relative Semantic Similarity Measures for Learned Classes
Class Namespaces RSSM R(C1,C2),R(C2,C1) Merged
Article Morpheus/Bibtex 0,0 No
Book Aktors/Morpheus 0,0 No
Book Aktors/Annotation 0.38,0.36 No
Book Aktors/Bibtex 0.16,0.20 No
Conference Aktors/Annotation 0.5,0.25 Yes
Department Annotation - -
Employee Aktors/Annotation 1,0.35 Yes
Event Aktors/Morpheus 0,0 No
Event Aktors/Annotation 0.5,1 Yes
Table 12 Actual Class and Those Resulting from Merging based on Relative Semantic Similarity
Measures
Class Namespace Neighbourhood Semantic Relations
Conference Reference SuperClass:
Event
DisjointWith:
Activity; Meeting;
SpecialIssueEvent;
Workshop
Number only string
Series only string
Location only string
atEvent only Event
publication only
Publication
hasParts only Event
orgCommittee only Person
date only string
eventTitle only string
keyword only string
Learnt Class
After
Integration
SuperClasses learnt:
Meeting taking place
Event;
DisjointWith learnt:
Workshop
EventProduct only
Publication
Location as string
Date as string
EventTitle as string
Employee Reference SuperClass: Person
SubClass:
Academic staff
Administrative staff
DisjointWith:
Student
Address only string
fax only string
photo only string
email only string
lastName only string
name only string
middleInitial only string
phone only string
firstName only string
keyword only string
Learnt Class
After
Integration
SuperClass: Person
SubClass:
Educational Support Staff
Secretary System-
Administrator, Graphic
Designer, Multimedia,
DisjointWith:
Faculty Member
Researcher, Student
Email string
Firstname string
Phone as string
name only string
Middle initial only string
Lastname string
Photo string
Fax string
Researchtopics topic
Homepage string
Has_affiliation only
organization
Address string
Involvedin project only
project
Event Reference SuperClass: Object
SubClasses:
Activity,Conference
Workshop,
SpecialIssueEvent
Meeting
atEvent only event
date only string
eventTitle only string
hasParts only Event
location only string
orgCommittee only person
publication only
publication
keyword only string
Learnt Class
After
Integration
SuperClasses: Thing-
temporal thing
SubClass:
Conference; Workshop;
Tutorial;
Date string
eventTitle string
location string
The results of performing Level 3 are summarized as follows:
- The classes merged based on the RSSM lead to correct learning of a class with respect to the class definition in the reference ontology.
- The framework was able to uncover other semantic relations that were defined in other ontologies and were similar to the ones defined for a class in the reference ontology.
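The manual Level 3 comparison can be approximated by intersecting the (property, filler) constraints defined on a class in the reference ontology with those learned after integration, as in Tables 11 and 12; a sketch with abbreviated, illustrative constraints:

```python
# Hedged sketch: overlap between reference and learned property constraints
# on a class. Constraint pairs below are abbreviated from Table 12's Event
# example and are illustrative, not the complete set.

def matched_relations(reference_props, learned_props):
    """Case-insensitive match of (property, filler) constraint pairs."""
    norm = lambda pairs: {(p.lower(), f.lower()) for p, f in pairs}
    return norm(reference_props) & norm(learned_props)

reference_props = [("location", "string"), ("date", "string"),
                   ("eventTitle", "string"), ("orgCommittee", "Person")]
learned_props = [("Location", "string"), ("Date", "string"),
                 ("EventTitle", "string")]
common = matched_relations(reference_props, learned_props)
```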
6. CONCLUSION
In this paper an approach to the extraction and integration of concepts/classes across multiple ontologies is proposed and evaluated against a well-formed ontology. Concept similarity computation needs to be viewed through the prism of similarity versus dissimilarity of features, in order to address natural-language disambiguation issues in which similar names in two different ontologies may represent the same concept or entirely different ones. The problem of concept similarity is reduced to set/feature-based matching, and a Relative Semantic Similarity Measure formula is proposed for its computation.
A novel way of presenting the learned relationships using an adjacency matrix is proposed; this representation is convenient for the ontology developer during the design phase of an ontology.
This framework allows ontology editors to reuse ontologies that exist on the Semantic Web. It automates the process of finding relevant ontologies as well as identifying and integrating concepts across multiple ontologies. However, the framework has some limitations: it works best for domains described in natural language; for domains built on technical or symbolic representations, such as the medical and chemical domains, where word sense disambiguation techniques are not applicable, this approach may not give the best results. The availability of ontologies for a domain also affects the results of the framework.
7. FUTURE WORK
A research challenge relevant in the context of ontologies is how semantic repositories function over time, taking into account their necessary maintenance and deployment. This promising area of research is termed ontology evolution. Ontologies are conceptualizations of domains, which are affected by changes in the world, and they therefore need to evolve to stay relevant to the model of the world they represent. Other factors that cause ontologies to evolve are corrections of design flaws, changing user and business requirements, and a shift of focus in a domain.
This framework makes use of ontologies that exist in the decentralized environment of the Web. The likelihood that changes or evolution of these ontologies will have to be reflected in the ontology created using this framework cannot be ruled out. Therefore, the functionality of the kernel should in future be expanded to include a module that takes care of such needs.
REFERENCES
[1] S.A.M Rizvi & Nadia Imdadi (2008), “Framework for Automatic Semantic Integration of Semantic
Repositories”, International Conference on Semantic e-Business & Enterprise Computing, Kerala,
India.
[2] Nadia Imdadi & S.A.M Rizvi (2010),“Framework for Automatic Reuse of Existing Online Semantic
Resources by Facilitating Concept Extraction Using Word Sense Disambiguation in Computational
Linguistics Techniques”, International Conference on Semantic Web & Web Services, WorldComp,
Nevada, USA.
[3] Nadia Imdadi & S.A.M Rizvi (2010), “Automating Reuse of Semantic Repositories in the Context of
Semantic Web”, International Conference on Semantic e-Business & Enterprise Computing, Springer,
Tamil Nadu, India, pp 518-523 ISBN: 978-3-642-14493-6.
[4] Nadia Imdadi & S.A.M Rizvi (2011), “Using Hash based Bucket Algorithm to Select Online
Ontologies for Ontology Engineering through Reuse”, International Journal of Computer
Applications, 28(7):21-25, August 2011. Published by Foundation of Computer Science, New York,
USA
[5] Harith Alani (2006), “Position paper: ontology construction from online ontologies”, In Proceedings
of the 15th international conference on World Wide Web, ACM, New York, NY, USA, 491-495.
[6] Elena Simperl (2009), “Reusing ontologies on the Semantic Web: A feasibility study”, Data &
Knowledge Engineering Elsevier, 68 905–925.
[7] L. Ding, T. Finin, A. Joshi, R. Pan, R. S. Cost, Y. Peng, P. Reddivari, V. C. Doshi, & J. Sachs (2004),
“Swoogle: A semantic web search & metadata engine”, In Proc. 13th ACM Conf. on Information &
Knowledge Management.
[8] Ebiquity Group at UMBC, “Swoogle Web Services”, [Online]. Available:
http://swoogle.umbc.edu/index.php?option=com_swoogle_manual&manual=search_overview
[9] Grigoris Antoniou, Enrico Franconi, Frank Van Harmelen (2005), “Introduction to Semantic Web Ontology Languages”, Reasoning Web, Proceedings of the Summer School (Number 3564 in Lecture Notes in Computer Science), Malta.
[10] OWL Web Ontology Language Reference (2004), W3C Recommendation 10 February 2004,
http://www.w3.org/TR/owl-ref/.
[11] Matthew H., Simon J., Georgina M., Alan R., Robert S., Chris Wroe (2007), “A Practical Guide To Building OWL Ontologies Using Protégé 4 & CO-ODE Tools Edition 1.1”, The University Of Manchester, http://owl.cs.manchester.ac.uk/tutorials/protegeowltutorial/resources/ProtegeOWLTutorialP4_v1_1.pdf
[12] The OWL API, University of Manchester, http://owlapi.sourceforge.net/
[13] T. Bach, & R. Dieng-Kuntz (2005), “Measuring Similarity of Elements in OWL DL Ontologies”, The
Twentieth National Conference on Artificial Intelligence, AAAI.
[14] Le D. Ngan, Tran M. Hang, & Angela E. S. Goh (2006), “Semantic Similarity between Concepts
from Different OWL Ontologies”, IEEE International Conference on Industrial Informatics.
[15] Xiquan Yang, Ye Zhang, Na Sun, Deran Kong (2009), “Research on Method of Concept Similarity Based on Ontology”, Proceedings International Symposium on Web Information Systems & Applications, pp. 132-135, ISBN 978-952-5726-00-8.
[16] Wikipedia The Free Encyclopedia, Jaccard index, http://en.wikipedia.org/wiki/Jaccard_index
[17] The Levenshtein-Algorithm, http://www.levenshtein.net/index.html
[18] Levenshtein Edit Distance, http://www.miislita.com/searchito/levenshtein-edit-distance.html
[19] Benjamin Bach, Emmanuel Pietriga, Ilaria Liccardi, Gennady Legostaev (2011), “OntoTrix: a hybrid
visualization for populated ontologies”, In Proceedings of the 20th International Conference
Companion on World Wide Web, pp. 177-180, Hyderabad, India.
[20] Benjamin Bach, Emmanuel Pietriga, Ilaria Liccardi, Gennady Legostaev (2011), “RDF Visualization using a Three-Dimensional Adjacency Matrix”, 4th International Semantic Search Workshop, Hyderabad, India.
[21] Janez Brank, Marko Grobelnik, Dunja Mladenić (2005), “A survey of ontology evaluation techniques”, In Proceedings of the Conference on Data Mining & Data Warehouses.
[22] Dellschaft, K. & Staab, S. (2006), “On How to Perform a Golden Standard Based Evaluation of
Ontology Learning”, International Semantic Web Conference, pp. 228-241
[23] Christopher Brewster, Jose Iria & Ziqi Zang (2007), “Automating Ontology Learning for the
Semantic Web, OnteEval Tool”, Abraxas Project.