Producing, Publishing and Consuming Linked Data
Three lessons from the Bio2RDF project
François Belleau (francoisbelleau@yahoo.ca)

Background
With the proliferation of new online databases, data integration remains one of the major unsolved problems in bioinformatics. In spite of initiatives like BioPAX [1], BioMart [2], and the integrated web resources of the EBI, KEGG and NCBI, the web of bioinformatics databases is still a web of independent data silos.
Since 2005, the aim of the Bio2RDF project has been to make popular public datasets available in RDF, the data description format of the growing Semantic Web. Initially, data from OMIM, KEGG, Entrez Gene and numerous other resources were converted to RDF. Currently, 38 SPARQL endpoints are available from the Bio2RDF server [3].
The Bio2RDF project was the primary source of bioinformatics data in the Linked Data cloud in 2009. Today many organisations have started to publish their datasets or knowledge bases using the RDF/SPARQL standards. GO, UniProt and Reactome were early converts to publishing in RDF. More recently, PDBj, KEGG and NCBO have started to publish their own data in the new semantic way. From the data integration perspective, projects like BioLOD [4] from the RIKEN institute and Linked Life Data [5] from Ontotext have pushed the Semantic Web model close to a production-quality service. The Linked Data cloud of bioinformatics is now growing rapidly [6]. The technology incubation phase is over.

[Figure: the workflow producing triples from the GenBank HTML web page about external database references.]

One question data providers should ask themselves now is: how costly is it to produce and publish data in RDF according to this new paradigm? And, from the bioinformatician data consumer's point of view: how useful can Semantic Web technologies be in building the data mashups needed to support specific knowledge discovery tasks and the needs of domain experts?

[Figure: the instructions creating triples from the data flow.]

These are the questions we answer here by proposing methods for producing, publishing and consuming RDF data, and by sharing the lessons we have learned while building Bio2RDF.

Producing RDF

RDF is all about triples: building triples, storing triples and querying triples. A triple is defined by the subject-predicate-object model. If you have used a key-value table before, you already know what triples are. A collection of triples defines a graph so generic that any data can be represented with it. Every kind of data can be converted into triples from all common formats: HTML, XML, relational databases, column tables or key-value representations. Converting data to RDF is so important to building the Semantic Web that it is expressed by new verbs: to triplify, or to rdfize! Building the Bio2RDF rdfizers, we had to deal with all of these data formats and sources.

[Figure: expose data as RDF using dereferenceable URIs, according to design rules #1 and #2.]
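The triple model is easy to sketch in code. A minimal illustration using nothing but plain Python tuples (the URIs follow the Bio2RDF pattern, but the predicate names and label values here are simplified placeholders, not actual database content):

```python
# A toy triplestore: a list of (subject, predicate, object) tuples.
# URIs follow the Bio2RDF pattern; predicates and labels are illustrative.
triples = [
    ("http://bio2rdf.org/omim:602080", "rdfs:label", "an OMIM record"),
    ("http://bio2rdf.org/omim:602080", "bio2rdf:xRef", "http://bio2rdf.org/geneid:1234"),
    ("http://bio2rdf.org/geneid:1234", "rdfs:label", "a gene record"),
]

def match(pattern, store):
    """Return the triples matching an (s, p, o) pattern; None is a wildcard."""
    return [t for t in store
            if all(p is None or p == v for p, v in zip(pattern, t))]

# Everything stated about one subject, regardless of predicate:
about_omim = match(("http://bio2rdf.org/omim:602080", None, None), triples)

# All labels in the graph, regardless of subject:
labels = match((None, "rdfs:label", None), triples)
```

The same pattern-matching idea, generalised and indexed, is what a SPARQL engine provides over billions of triples.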


Lesson #1: Transforming data into RDF is an ETL (Extract, Transform, Load) task, and free, professional-grade frameworks are now available for this purpose.

Talend [7] is a first-class ETL framework, based on Eclipse, that generates native Java code from a graphical representation of the data transformation workflow. Using this professional-quality software to rdfize data is much more productive than writing the Java, Perl or PHP scripts we used to write in the past.

To build the namespace SPARQL endpoint at Bio2RDF [8], an RDF mashup composed of the GO, UniProt, LSRN, GenBank, MIRIAM and Bio2RDF namespace descriptions, we generated RDF from XML, HTML, key-value files, tabular files and an RDF dump. Using the Talend ETL framework made the programming and quality testing far more efficient.

[Figure: a query used to discover the schema of an unknown triplestore.]
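A typical schema-discovery query simply asks which classes an unknown store uses. Here is a hedged sketch, assuming Python's standard urllib and Bio2RDF's public endpoint; the aggregate form shown (COUNT/GROUP BY) is supported by Virtuoso, and actually sending the request requires network access:

```python
from urllib.parse import urlencode
from urllib.request import Request

# A generic schema-discovery query: which classes does this store use,
# and how many instances of each? (Aggregates as supported by Virtuoso.)
query = """
SELECT DISTINCT ?type (COUNT(?s) AS ?n)
WHERE { ?s a ?type }
GROUP BY ?type
ORDER BY DESC(?n)
"""

# SPARQL endpoints accept the query text as an ordinary HTTP GET parameter.
endpoint = "http://namespace.bio2rdf.org/sparql"
url = endpoint + "?" + urlencode({"query": query})
req = Request(url, headers={"Accept": "application/sparql-results+json"})

# To actually run it (network access required):
# from urllib.request import urlopen
# results = urlopen(req).read()
```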


Publish on the Linked Data web

The inventor of HTML, Tim Berners-Lee, has also defined the rules by which the Semantic Web should be designed [9]:

 1)   Use URIs as names for things.
 2)   Use HTTP URIs so that people can look up those names.
 3)   When someone looks up a URI, provide useful information, using the standards (RDF*, SPARQL).
 4)   Include links to other URIs, so that they can discover more things.

Building Bio2RDF, we were early adopters of these rules. The DBpedia project, a version of Wikipedia available in RDF through one of the first major public SPARQL endpoints, is at the heart of the Linked Data cloud. It is built using the Virtuoso triplestore [10], a first-class piece of software that is free and open source.
Lesson #2: To publish Semantic Web data, choose a good triplestore and make a SPARQL endpoint publicly available on the Internet.

[Figure: discover concepts using type-ahead search.]
The Bio2RDF project has also depended on Virtuoso, and benefits from the innovations in each new version. Virtuoso not only offers a SPARQL endpoint accepting queries based on the W3C standards; full-text search and a facet-browsing user interface are also available, so the RDF graph can be browsed, queried, searched and explored with a type-ahead completion service. All this comes out of the box from a single software product.

[Figure: full-text search query results, ranked by the number of connections in the graph.]
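That full-text search is reachable through SPARQL itself. A minimal sketch, assuming Virtuoso's non-standard `bif:contains` extension (the search term and the use of `rdfs:label` are illustrative; standard-compliant stores will reject this form):

```python
# Virtuoso exposes its full-text index through the bif:contains predicate.
# This is a Virtuoso-specific SPARQL extension, not part of the W3C standard.
fulltext_query = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?s ?label
WHERE {
  ?s rdfs:label ?label .
  ?label bif:contains "kinase" .
}
LIMIT 20
"""
```

Submitted to a Virtuoso endpoint like any other query, this returns the labelled resources whose text index matches the term.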

Sesame [11], 4store [12], Mulgara [13] and other new projects emerging each year make publishing data over the web an affordable reality.

Consuming triples

Why should we start using Semantic Web data and technologies? Because building a database from public resources on the web is more efficient than the traditional way of creating a data warehouse. The Giant Global Graph (GGG) of the entire Semantic Web is the new datastore from which you can build your semantic mashups, with the tools of your choice.

To answer a high-level scientific question from data already available in RDF, you first need to build a specific triplestore that you can then query and from which, hopefully, you will obtain the expected results. Building a specific database just to answer a specific question: this is what semantic data mashups are about.
Lesson #3: Semantic data sources available from SPARQL endpoints can be consumed in all kinds of ways to create mashups.

For example, ways of consuming RDF include: (i) SPARQL queries over REST; (ii) RDF graphs dereferenced by URI over HTTP; (iii) SOAP services returning RDF; or, better still, (iv) the new web services model proposed by the SADI framework [14]. Programming in Java, PHP, Ruby or Perl, using the RDF/XML, Turtle or JSON/RDF formats, is also possible, and the software needed gets better every year. It is a wild new world of open technologies to learn, use and benefit from.

[Figure: using the popular soapUI tool [16], you can consume Bio2RDF's SOAP services, which return triples in N-Triples format.]

The Bio2RDF project first offered an RDF graph that can be dereferenced by URI, in the form http://bio2rdf.org/omim:602080. An HTTP GET request returns the RDF version of a document from one of the databases we expose, in the format of your choice. Next, you can submit queries directly to one of our public SPARQL endpoints, such as http://namespace.bio2rdf.org/sparql. Programming a script, or designing a workflow with software like Taverna or Talend, you can build your data mashup from the growing Semantic Web data sources in days, not weeks.
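Such a dereferencing GET is a one-liner with any HTTP client. A minimal sketch with Python's standard library (the Accept header value is a common RDF media type; the formats actually served are decided by the Bio2RDF server, and fetching requires network access):

```python
from urllib.request import Request

# Content negotiation: ask for RDF rather than the default HTML page
# at the same URI.
uri = "http://bio2rdf.org/omim:602080"
req = Request(uri, headers={"Accept": "application/rdf+xml"})

# To actually fetch the document (network access required):
# from urllib.request import urlopen
# rdf_document = urlopen(req).read()
```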

To explore the possibilities offered by a triplestore, discover the Bio2RDF SPARQL endpoint about bioinformatics databases at http://namespace.bio2rdf.org/fct and submit SPARQL queries to its endpoint at http://namespace.bio2rdf.org/sparql. And, if you are a SOAP services user, consume its web services described at http://namespace.bio2rdf.org/bio2rdf/SOAP/services.wsdl.

Discussion

Combining data from different sources is the main problem of data integration in bioinformatics. The Semantic Web community has addressed this problem for years, and the emerging Semantic Web technologies are now mature and ready to be used in production-scale systems. The Bio2RDF community believes that the data integration problem in bioinformatics can be solved by applying existing Semantic Web practices. The bioinformatics community could benefit significantly from what is being developed now; in fact, our community has done a lot to show that the Semantic Web model has great potential for solving life science problems. By sharing our own Bio2RDF experience and the simple lessons we have learned, we hope you will give it a try in your next data integration project.

[Figure: using the RelFinder tool [15], it is possible to query RDF graphically and visualise the triplestore's graph.]


References
1) http://www.biopax.org/
2) http://www.biomart.org/
3) http://www.bio2rdf.org/
4) http://biolod.org/
5) http://linkedlifedata.com/
6) http://richard.cyganiak.de/2007/10/lod/
7) http://talend.com/
8) http://namespace.bio2rdf.org/sparql
9) http://www.w3.org/DesignIssues/LinkedData.html
10) http://virtuoso.openlinksw.com/dataspace/dav/wiki/Main/
11) http://www.openrdf.org/
12) http://4store.org/
13) http://www.mulgara.org/
14) http://sadiframework.org
15) http://www.visualdataweb.org/relfinder.php
16) http://www.soapui.org/

Acknowledgements
● Bio2RDF is a community project available at http://bio2rdf.org
● The community can be joined at https://groups.google.com/forum/?fromgroups#!forum/bio2rdf
● This work was done under the supervision of Dr Arnaud Droit, assistant professor and director of the Centre de Biologie Computationnelle du CRCHUQ at Laval University, where a mirror of Bio2RDF is hosted.
● Michel Dumontier, from the Dumontier Lab at Carleton University, also hosts a Bio2RDF server and currently leads the project.
● Thanks to all the members of the Bio2RDF community, and especially Marc-Alexandre Nolin and Peter Ansell, the initial developers.
 
Linuq 20160130
Linuq 20160130Linuq 20160130
Linuq 20160130
 
textOdossier
textOdossiertextOdossier
textOdossier
 
BD2K hackathon - Bio2RDF submission
BD2K hackathon - Bio2RDF submissionBD2K hackathon - Bio2RDF submission
BD2K hackathon - Bio2RDF submission
 
Découvrir le web sémantique en 15 minutes (Decideo 2014)
Découvrir le web sémantique en 15 minutes (Decideo 2014)Découvrir le web sémantique en 15 minutes (Decideo 2014)
Découvrir le web sémantique en 15 minutes (Decideo 2014)
 
Bio2RDF poster for Biocurator 2014 conference
Bio2RDF poster for Biocurator 2014 conferenceBio2RDF poster for Biocurator 2014 conference
Bio2RDF poster for Biocurator 2014 conference
 
Acfas 2013 - Comment publier sur le web sémantique : la méthode de Bio2RDF
Acfas 2013 - Comment publier sur le web sémantique : la méthode de Bio2RDFAcfas 2013 - Comment publier sur le web sémantique : la méthode de Bio2RDF
Acfas 2013 - Comment publier sur le web sémantique : la méthode de Bio2RDF
 
Bio2RDF@BH2010
Bio2RDF@BH2010Bio2RDF@BH2010
Bio2RDF@BH2010
 
Bio2RDF @ W3C HCLS2009
Bio2RDF @ W3C HCLS2009Bio2RDF @ W3C HCLS2009
Bio2RDF @ W3C HCLS2009
 
Bio2RDF-ISMB2008
Bio2RDF-ISMB2008Bio2RDF-ISMB2008
Bio2RDF-ISMB2008
 
Bio2RDF : A Semantic Web Atlas of post genomic knowledge about Human and Mouse
Bio2RDF : A Semantic Web Atlas of post genomic knowledge about Human and MouseBio2RDF : A Semantic Web Atlas of post genomic knowledge about Human and Mouse
Bio2RDF : A Semantic Web Atlas of post genomic knowledge about Human and Mouse
 
Bio2RDF should we do it
Bio2RDF should we do itBio2RDF should we do it
Bio2RDF should we do it
 
Bio2RDF: Towards A Mashup To Build Bioinformatics Knowledge System
Bio2RDF: Towards A Mashup To Build Bioinformatics Knowledge SystemBio2RDF: Towards A Mashup To Build Bioinformatics Knowledge System
Bio2RDF: Towards A Mashup To Build Bioinformatics Knowledge System
 
Bio2RDF/Virtuoso
Bio2RDF/VirtuosoBio2RDF/Virtuoso
Bio2RDF/Virtuoso
 

Último

08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking MenDelhi Call girls
 
Tech Trends Report 2024 Future Today Institute.pdf
Tech Trends Report 2024 Future Today Institute.pdfTech Trends Report 2024 Future Today Institute.pdf
Tech Trends Report 2024 Future Today Institute.pdfhans926745
 
Boost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfBoost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfsudhanshuwaghmare1
 
08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking MenDelhi Call girls
 
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationSafe Software
 
GenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdfGenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdflior mazor
 
Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...Enterprise Knowledge
 
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Miguel Araújo
 
Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024The Digital Insurer
 
[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdfhans926745
 
Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024The Digital Insurer
 
Artificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and MythsArtificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and MythsJoaquim Jorge
 
IAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsIAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsEnterprise Knowledge
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processorsdebabhi2
 
Evaluating the top large language models.pdf
Evaluating the top large language models.pdfEvaluating the top large language models.pdf
Evaluating the top large language models.pdfChristopherTHyatt
 
🐬 The future of MySQL is Postgres 🐘
🐬  The future of MySQL is Postgres   🐘🐬  The future of MySQL is Postgres   🐘
🐬 The future of MySQL is Postgres 🐘RTylerCroy
 
GenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationGenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationMichael W. Hawkins
 
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024The Digital Insurer
 
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...Igalia
 
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdfThe Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdfEnterprise Knowledge
 

Último (20)

08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
 
Tech Trends Report 2024 Future Today Institute.pdf
Tech Trends Report 2024 Future Today Institute.pdfTech Trends Report 2024 Future Today Institute.pdf
Tech Trends Report 2024 Future Today Institute.pdf
 
Boost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfBoost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdf
 
08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men
 
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
 
GenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdfGenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdf
 
Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...
 
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
 
Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024
 
[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf
 
Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024
 
Artificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and MythsArtificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and Myths
 
IAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsIAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI Solutions
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processors
 
Evaluating the top large language models.pdf
Evaluating the top large language models.pdfEvaluating the top large language models.pdf
Evaluating the top large language models.pdf
 
🐬 The future of MySQL is Postgres 🐘
🐬  The future of MySQL is Postgres   🐘🐬  The future of MySQL is Postgres   🐘
🐬 The future of MySQL is Postgres 🐘
 
GenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationGenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day Presentation
 
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
 
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
 
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdfThe Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
 

Producing, Publishing and Consuming Linked Data: Three lessons from the Bio2RDF project

Background

François Belleau (francoisbelleau@yahoo.ca)

With the proliferation of new online databases, data integration remains one of the major unsolved problems in bioinformatics. In spite of initiatives like BioPAX [1] and BioMart [2], and the integrated web resources of the EBI, KEGG and NCBI, the web of bioinformatics databases is still a web of independent data silos.

Since 2005, the aim of the Bio2RDF project has been to make popular public datasets available in RDF, the data description format of the growing Semantic Web. Initially, data from OMIM, KEGG, Entrez Gene and numerous other resources were converted to RDF. Currently, 38 SPARQL endpoints are made available from the Bio2RDF server [3].

In 2009 the Bio2RDF project was the primary source of bioinformatics data in the Linked Data cloud. Today, many organisations have started to publish their datasets or knowledge bases using the RDF/SPARQL standards: GO, UniProt and Reactome were early converts to publishing in RDF, and more recently PDBJ, KEGG and NCBO have started to publish their own data in this new semantic way. From the data integration perspective, projects like BioLOD [4] from the RIKEN institute and Linked Life Data [5] from Ontotext have pushed the Semantic Web model close to production-quality service. The Linked Data cloud of bioinformatics is now growing rapidly [6]; the technology incubation phase is over.

One question data providers should now ask themselves is: how costly is it to produce and publish data in RDF according to this new paradigm?

Lesson #1: Rdfize data using ETL software like Talend.

[Figure: the Talend workflow producing triples from the GenBank HTML web page about external database references.]
And, from the bioinformatician's point of view as a data consumer: how useful can Semantic Web technologies be for building the data mashups needed to support specific knowledge discovery tasks and the needs of domain experts? These are the questions we answer here, by proposing methods for producing, publishing and consuming RDF data, and by sharing the lessons we learned while building Bio2RDF.

[Figure: the Talend instructions creating triples from the data flow.]

Producing RDF

RDF is all about triples: building triples, storing triples and querying triples. A triple is defined by the subject-predicate-object model; if you have used a key-value table before, you already know what triples are. A collection of triples defines a graph so generic that any data can be represented with it. Every kind of data can be converted into triples from all the usual formats: HTML, XML, relational databases, column tables or key-value representations. Converting data to RDF is so central to building the Semantic Web that it is expressed by new verbs: to triplify, or to rdfize! Building the Bio2RDF rdfizers, we had to deal with all of these data formats and sources.

Lesson #1: Transforming data into RDF is an ETL (Extract, Transform, Load) task, and both free and professional frameworks are now available for this purpose. Talend [7] is a first-class ETL framework, based on Eclipse, that generates native Java code from a graphical representation of the data transformation workflow. Using this professional-quality software to rdfize data is much more productive than writing the Java, Perl or PHP scripts we used to write in the past.

[Sidebar: Expose data as RDF using dereferenceable URIs according to design rules #1 and #2. Make a SPARQL endpoint public so queries can be submitted.]
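The subject-predicate-object model above can be sketched in a few lines of Python. This toy rdfizer is an illustration only, not the Talend workflow Bio2RDF actually uses, and the predicate namespace it mints under bio2rdf.org/ns is a made-up assumption; only the subject URI pattern comes from the project.

```python
# Toy "rdfizer" sketch: turn one key-value record into N-Triples lines.
# The subject URI pattern (bio2rdf.org/namespace:id) follows Bio2RDF;
# the predicate namespace below is hypothetical, for illustration only.

def rdfize(namespace, identifier, record):
    """Emit one N-Triples line per key-value pair of a record."""
    subject = "<http://bio2rdf.org/%s:%s>" % (namespace, identifier)
    lines = []
    for key, value in sorted(record.items()):
        # Hypothetical predicate URI scheme, one predicate per key.
        predicate = "<http://bio2rdf.org/ns/%s#%s>" % (namespace, key)
        literal = '"%s"' % value.replace('\\', '\\\\').replace('"', '\\"')
        lines.append("%s %s %s ." % (subject, predicate, literal))
    return lines

triples = rdfize("omim", "602080", {"label": "Example disease entry"})
print("\n".join(triples))
```

The same extract-and-emit loop is what an ETL workflow does graphically: one component reads records, one maps keys to predicates, one writes the triples out.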
To build the namespace SPARQL endpoint at Bio2RDF [8], an RDF mashup composed of the GO, UniProt, LSRN, GenBank, MIRIAM and Bio2RDF namespace descriptions, we generated RDF from XML, HTML, key-value files, tabular files and also an RDF dump. Using the Talend ETL framework made the programming job and quality testing far more efficient.

[Figure: a query used to discover the schema of an unknown triplestore.]

Publish on the Linked Data web

The inventor of HTML, Tim Berners-Lee, has also defined the rules by which the Semantic Web should be designed [9]:
1) Use URIs as names for things.
2) Use HTTP URIs so that people can look up those names.
3) When someone looks up a URI, provide useful information, using the standards (RDF*, SPARQL).
4) Include links to other URIs, so that they can discover more things.

Building Bio2RDF, we were early adopters of these rules. The DBpedia project, a version of Wikipedia available in RDF format and through one of the first major public SPARQL endpoints, is at the heart of the Linked Data cloud; it is built using the Virtuoso triplestore [10], a first-class piece of software that is free and open source.

Lesson #2: To publish semantic web data, choose a good triplestore like Virtuoso and make a SPARQL endpoint publicly available on the Internet.

The Bio2RDF project has also depended on Virtuoso, and benefits from the innovations in each new version. Virtuoso not only offers a SPARQL endpoint for submitting queries based on the W3C standards; full-text search and a facet-browsing user interface are also available, so the RDF graph can be browsed, queried, searched and explored with a type-ahead completion service. All this from one software product, directly out of the box.

[Sidebar: Discover concepts using type-ahead search. Full-text search ranks query results by the number of connections in the graph.]
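A schema-discovery query of the kind mentioned here can be sketched as follows. The SPARQL text is standard; the only assumption is the usual SPARQL-protocol convention of passing the query as a `query` URL parameter, which Virtuoso endpoints accept. The request URL is built but deliberately not sent, so the sketch has no network dependency.

```python
# Sketch: discover which classes an unknown triplestore contains by
# asking its public SPARQL endpoint. The endpoint URL is Bio2RDF's;
# the "?query=..." GET convention is the standard SPARQL protocol.
import urllib.parse

SCHEMA_QUERY = """
SELECT DISTINCT ?class (COUNT(?s) AS ?instances)
WHERE { ?s a ?class }
GROUP BY ?class
ORDER BY DESC(?instances)
""".strip()

def sparql_url(endpoint, query):
    """Build the GET URL for a SPARQL query (the request is not sent here)."""
    params = {"query": query, "format": "application/sparql-results+json"}
    return endpoint + "?" + urllib.parse.urlencode(params)

url = sparql_url("http://namespace.bio2rdf.org/sparql", SCHEMA_QUERY)
print(url)
```

Fetching that URL with any HTTP client would return a ranked list of the classes used in the store, which is usually the first step when exploring someone else's endpoint.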
Sesame [11], 4store [12], Mulgara [13] and other new projects emerging each year make publishing data over the web an affordable reality.

Consuming triples

Why should we start using Semantic Web data and technologies? Because building a database from public resources on the web is more efficient than the traditional way of creating a data warehouse. The Giant Global Graph (GGG) of the entire Semantic Web is the new datastore from which you can build your semantic mashup, with the tools of your choice. To answer a high-level scientific question from data already available in RDF, you first need to build a specific triplestore that you can then query and, hopefully, obtain the expected results from. Building a specific database just to answer a specific question: this is what semantic data mashups are about.

Lesson #3: Consume semantic data as you like, using HTTP GET, SOAP services or new tools designed to explore RDF data.

Semantic data sources available from SPARQL endpoints can be consumed in all kinds of ways to create mashups. For example, ways of consuming RDF include: (i) SPARQL queries over REST, (ii) RDF graphs dereferenced by URI over HTTP, (iii) SOAP services returning RDF, or, better still, (iv) the new web services model proposed by the SADI framework [14]. Programming in Java, PHP, Ruby or Perl, using the RDF/XML, Turtle or RDF/JSON formats, is also possible, and the available software gets better each year. It is a wild new world of open technologies to learn, use and benefit from.

[Figure: using the popular soapUI tool [16], you can consume Bio2RDF's SOAP services, which return triples in N-Triples format.]

The Bio2RDF project first offered an RDF graph that could be dereferenced by a URI of the form http://bio2rdf.org/omim:602080. Any HTTP GET request will return the RDF version of a document from one of the databases we expose as RDF, in the format of your choice.
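Option (ii), dereferencing an RDF graph by URI, is a plain HTTP GET with content negotiation. A minimal sketch using the http://bio2rdf.org/omim:602080 URI pattern from the text; whether a given endpoint honours every RDF media type is an assumption, and the request is only constructed here, not sent.

```python
# Sketch: dereference a Bio2RDF-style URI over HTTP GET with content
# negotiation. build_request only constructs the request object;
# sending it (urllib.request.urlopen) is left out so the example
# carries no network dependency.
import urllib.request

def build_request(namespace, identifier, media_type="application/rdf+xml"):
    """Prepare a GET request for one Bio2RDF record, asking for RDF."""
    uri = "http://bio2rdf.org/%s:%s" % (namespace, identifier)
    # The Accept header tells the server which RDF serialization we want.
    return urllib.request.Request(uri, headers={"Accept": media_type})

req = build_request("omim", "602080")
print(req.full_url, req.get_header("Accept"))
```

Passing `req` to `urllib.request.urlopen` would perform the actual GET; the same pattern works from any language with an HTTP client, which is the point of design rules #2 and #3.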
Next, you can submit queries directly to one of our public SPARQL endpoints, such as http://namespace.bio2rdf.org/sparql. Programming a script, or designing a workflow with software like Taverna or Talend, you can build your data mashup from the growing Semantic Web data sources in days, not weeks. To explore the possibilities offered by a triplestore, discover the Bio2RDF SPARQL endpoint about bioinformatics databases at http://namespace.bio2rdf.org/fct and submit SPARQL queries to its endpoint at http://namespace.bio2rdf.org/sparql. And, if you are a SOAP services user, consume its web services described at http://namespace.bio2rdf.org/bio2rdf/SOAP/services.wsdl.

Discussion

Combining data from different sources is the main problem of data integration in bioinformatics. The Semantic Web community has addressed this problem for years, and the emerging Semantic Web technologies are now mature and ready to be used in production-scale systems. The Bio2RDF community thinks that the data integration problem in bioinformatics can be solved by applying existing Semantic Web practices. The bioinformatics community could benefit significantly from what is being developed now; in fact, our community has done a lot to show that the Semantic Web model has great potential for solving life science problems. By sharing our own Bio2RDF experience and the simple lessons we have learned, we hope to convince you to give it a try in your next data integration project.

[Figure: using the RelFinder tool [15], it is possible to query RDF graphically and visualise the triplestore's graph.]

Acknowledgements

● Bio2RDF is a community project available at http://bio2rdf.org
● The community can be joined at https://groups.google.com/forum/?fromgroups#!forum/bio2rdf
● This work was done under the supervision of Dr Arnaud Droit, assistant professor and director of the Centre de Biologie Computationnelle du CRCHUQ at Laval University, where a mirror of Bio2RDF is hosted.
● Michel Dumontier, from the Dumontier Lab at Carleton University, also hosts a Bio2RDF server and currently leads the project.
● Thanks to all the members of the Bio2RDF community, and especially Marc-Alexandre Nolin and Peter Ansell, the initial developers.

References

1) http://www.biopax.org/
2) http://www.biomart.org/
3) http://www.bio2rdf.org/
4) http://biolod.org/
5) http://linkedlifedata.com/
6) http://richard.cyganiak.de/2007/10/lod/
7) http://talend.com/
8) http://namespace.bio2rdf.org/sparql
9) http://www.w3.org/DesignIssues/LinkedData.html
10) http://virtuoso.openlinksw.com/dataspace/dav/wiki/Main/
11) http://www.openrdf.org/
12) http://4store.org/
13) http://www.mulgara.org/
14) http://sadiframework.org
15) http://www.visualdataweb.org/relfinder.php
16) http://www.soapui.org/