Abstract. The research project MEDICO aims at developing an intelligent, robust, and scalable semantic search engine for medical images and is designed for different kinds of users, such as medical doctors, medical IT professionals, patients and citizens, and policy makers. Since semantic search results are not always self-explanatory, various kinds of explanation are necessary to satisfy different user goals. Our prime concern is to provide intuitive justifications for inexperienced users in the medical domain, using semantic networks as a form of depiction. In addition, we provide several interaction styles enabling a deeper insight into the medical knowledge.
1. Explaining Semantic Search Results of Medical Images in MEDICO
Björn Forcher, Manuel Möller, Michael Sintek, and Thomas Roth-Berghofer
Wednesday, 15 July 2009
3. "Trust me. I know what I am doing!"
5. Goal of the MEDICO Project
Development of an
• intelligent,
• robust, and
• scalable
semantic search engine for medical images
8. RadSem
• Tool to support medical doctors (esp. radiologists) in annotating and searching for medical images (and text)
• Part of the MEDICO project (funded by the BMWi in the THESEUS research programme)
• Developed together with medical experts (who have to use the tool to annotate real images)
9. Intended Users of RadSem
• Medical doctors
• Medical IT professionals
• Patients and citizens
• Policy makers
15. Foundational Model of Anatomy (FMA)
• Developed and maintained by the Structural Informatics Group at the University of Washington
• Contains more than 70,000 anatomical entities (classes)
• More than 1.5 million relations between the entities
• Most comprehensive ontology of human anatomy
16. ICD-10 in OWL
• Problem: no disease terminology available in OWL
• Established standard: International Classification of Diseases (WHO), but only available in semi-structured formats
• Approach: a crawler for the online version of ICD-10 generates a light-weight OWL ontology
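The crawling step described above could be sketched roughly as follows. This is a minimal illustration, not the actual MEDICO crawler: the sample codes, the namespace, and the function name are all hypothetical, and the output is plain Turtle built by string formatting.

```python
# Hypothetical sketch: turning crawled ICD-10 entries into a light-weight
# OWL class hierarchy, serialised as Turtle. Sample data and namespace
# are illustrative only.

SAMPLE_ENTRIES = [
    # (code, label, parent code or None)
    ("J00-J99", "Diseases of the respiratory system", None),
    ("J18", "Pneumonia, organism unspecified", "J00-J99"),
    ("J18.9", "Pneumonia, unspecified", "J18"),
]

PREFIX = "http://example.org/icd10#"   # placeholder namespace

def to_owl_turtle(entries):
    """Emit one owl:Class per ICD-10 code, linked by rdfs:subClassOf."""
    lines = [
        "@prefix owl: <http://www.w3.org/2002/07/owl#> .",
        "@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .",
        f"@prefix icd: <{PREFIX}> .",
        "",
    ]
    for code, label, parent in entries:
        uri = "icd:" + code.replace(".", "_")   # dots are not legal in local names
        lines.append(f"{uri} a owl:Class ;")
        tail = f"; rdfs:subClassOf icd:{parent.replace('.', '_')} ." if parent else "."
        lines.append(f'    rdfs:label "{label}" ' + tail)
    return "\n".join(lines)

print(to_owl_turtle(SAMPLE_ENTRIES))
```

The subclass links preserve the chapter/block/category hierarchy of the classification, which is what makes the resulting ontology "light-weight": classes and labels only, no further axioms.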
18. Example annotation: Region of Interest
• FMA
• ICD-10
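A region-of-interest annotation of this kind might be represented as a small record linking an image region to one anatomical concept (FMA) and one disease concept (ICD-10). All field names and URIs below are assumptions for illustration, not the actual MEDICO data model.

```python
# Illustrative sketch of a region-of-interest annotation in a RadSem-like
# tool: a rectangle on an image linked to an FMA and an ICD-10 concept.
from dataclasses import dataclass

@dataclass
class RegionAnnotation:
    image_id: str
    bbox: tuple          # (x, y, width, height) in pixels
    fma_concept: str     # URI of the anatomical entity
    icd10_concept: str   # URI of the disease class

roi = RegionAnnotation(
    image_id="ct-1234",
    bbox=(120, 80, 64, 48),
    fma_concept="http://example.org/fma#Lung",
    icd10_concept="http://example.org/icd10#J18_9",
)
print(roi.fma_concept, roi.icd10_concept)
```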
22. Basic explanation scenario
[Diagram: the Explainer mediates between the User Interface and the Originator; it draws on the Originator's problem-solving knowledge and on separate explanation knowledge. In RadSem, the Originator is the Semantic Search component.]
27. Basic explanation scenario (cont.)
• Query expansion with ontology concepts
• Count path length from search concept to found concept
[Diagram: Explainer, User Interface, Originator (Semantic Search)]
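The two steps above, query expansion and path-length counting, can be sketched under the assumption that the ontology is a plain dict of subclass edges and that "path length" means the number of hops from the search concept to the found concept. The tiny anatomy fragment is invented for illustration.

```python
# Sketch: ontology-based query expansion and hop counting between a
# search concept and a found concept. Data and function names are
# illustrative, not MEDICO's actual implementation.
from collections import deque

SUBCLASSES = {
    "Organ": ["Lung", "Heart"],
    "Lung": ["Left lung", "Right lung"],
}

def expand_query(concept):
    """Query expansion: the concept itself plus all its subclasses."""
    expanded, stack = [], [concept]
    while stack:
        c = stack.pop()
        expanded.append(c)
        stack.extend(SUBCLASSES.get(c, []))
    return expanded

def path_length(search_concept, found_concept):
    """Hops from search to found concept along subclass edges (BFS)."""
    queue = deque([(search_concept, 0)])
    seen = {search_concept}
    while queue:
        c, dist = queue.popleft()
        if c == found_concept:
            return dist
        for child in SUBCLASSES.get(c, []):
            if child not in seen:
                seen.add(child)
                queue.append((child, dist + 1))
    return None   # not reachable

print(expand_query("Organ"))
print(path_length("Organ", "Left lung"))  # → 2
```

A short path length then indicates that a found concept is closely related to the original search concept, which is the basis for justifying a result.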
28. Motivations for explanations in RadSem
• Test whether the search engine works correctly (Medical IT professionals)
• Test whether the ontologies are correctly modelled (Medical IT professionals)
• Learn about the medical domain (Patients and citizens)
• Justify results in order to increase trust (Patients and citizens)
31. Motivations for explanations in RadSem (cont.)
• Help users to improve their search (Medical doctors)
• Activate passive knowledge (Medical doctors)
• Users learn how to use the engine concerning the ontologies (Patients and citizens)
34. What are explanations?
Explanations are answers to questions.
37. When are questions being asked?
Whenever expectations are not met.
38. Explanation goals
• Transparency: How did the system reach an answer?
• Justification: Why is the answer a good answer?
• Relevance: Why is the question relevant?
• Conceptualisation: What is the meaning of a concept?
• Learning: Teach the user about the given domain.
Sørmo, F., Cassens, J., Aamodt, A.: Explanation in Case-Based Reasoning – Perspectives and Goals, 2005.
39. When are explanations good explanations?
• Short and easy to grasp
• Innovative
• Relevant
• Convincing
• Different perspectives and follow-up questions
• Natural
W. R. Swartout and J. D. Moore: Explanation in Second Generation Expert Systems. In J. David, J. Krivine, and R. Simmons, editors, Second Generation Expert Systems, pages 543–585, Berlin, 1993. Springer Verlag.
41. Kinds of explanations
• Action explanations and justifications: "How do search concepts relate to found concepts?"
• Concept explanations
42. Action explanations
• Action explanations explain the activities of the respective system (the originator).
• Example: "Why was this seat post selected?" – "For the given price, only one other seat post was available, but it was too short."
• In RadSem: reconstructive explanations based on search and found concepts.
43. Why-explanations
• Why-explanations provide causes or justifications for facts or events.
• Examples:
• Justification: "Why does the universe expand?" – "Because we can observe a red shift of the light emitted by other galaxies."
• Cause: "Because the whole matter was concentrated at one point of the universe and is moving apart."
44. Concept Explanations
• The goal of concept explanations is to build links between unknown and known concepts.
• Variations:
• Definition: "What is a bicycle?" – "A bicycle is a land vehicle with two wheels in line. Bicycles are a form of human-powered vehicle."
• Functional mapping: "What is a bicycle?" – "A bicycle serves as a means of transport."
• Prototypical usage of individual things or actions: "What is a bicycle?" – "The thing that man over there just crashed with."
• …
45. Basic explanation scenario (cont.)
• The Dijkstra algorithm estimates (reconstructs) the semantic search.
[Diagram: Explainer, User Interface, Originator (Semantic Search)]
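One way an explainer's use of Dijkstra's algorithm could look is sketched below, on an invented weighted concept graph (the slide gives neither the real graph nor the edge weights, so both are assumptions). The reconstructed shortest path is what the justification can then display to the user.

```python
# Hedged sketch: Dijkstra's algorithm reconstructing a shortest path
# between a search concept and a found concept in an ontology graph.
# Graph contents and weights are invented for illustration.
import heapq

GRAPH = {
    # concept: [(neighbour, weight), ...]
    "Thorax":      [("Lung", 1), ("Heart", 1)],
    "Lung":        [("Pneumonia", 2)],
    "Heart":       [("Myocarditis", 2)],
    "Pneumonia":   [],
    "Myocarditis": [],
}

def dijkstra_path(start, goal):
    """Return (cost, path) of the cheapest path, or (None, []) if absent."""
    heap = [(0, start, [start])]
    best = {}
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == goal:
            return cost, path
        if best.get(node, float("inf")) <= cost:
            continue   # already reached this node more cheaply
        best[node] = cost
        for neigh, w in GRAPH.get(node, []):
            heapq.heappush(heap, (cost + w, neigh, path + [neigh]))
    return None, []

print(dijkstra_path("Thorax", "Pneumonia"))  # → (3, ['Thorax', 'Lung', 'Pneumonia'])
```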
55. User experiment wrt explanations in RadSem
• Test whether the search engine works correctly (Medical IT professionals)
• Test whether the ontologies are correctly modelled (Medical IT professionals)
• Learn about the medical domain (Patients and citizens)
• Justify results in order to increase trust (Patients and citizens)
→ Results supported our motivations for providing explanations.
57. Future Work
• Selection of proper labels wrt different user groups
• Search for alternative paths
• Exploration of paths
• Tailoring of paths
• Dictionary for lexical concepts
• Links to Wikipedia
59. Take home messages
• RadSem is a complex annotation and search tool.
• Goals and kinds of explanations are a useful tool in designing a software system in an explanation-aware manner.
• The basic explanation scenario (Explainer, User Interface, Originator) helps identify communication partners.
• The exploration interface with concept explanations supports domain understanding.
• The justification interface provides action explanations, which counteract encapsulation and information hiding.
64. Thank you!
Explaining Semantic Search Results of Medical Images in MEDICO
Thomas Roth-Berghofer
Senior Researcher, trb@dfki.de
German Research Centre for Artificial Intelligence (DFKI GmbH)