A myth or a vision for interoperability: can systems communicate like humans do?
1. A myth or a vision for
interoperability: can systems
communicate like humans do?
dr Milan Zdravković
Laboratory for Intelligent Production Systems (LIPS)
Faculty of Mechanical Engineering in Niš, University of Niš,
Serbia
Seminar - Interoperability challenges and needs: When Research meets Industry, 3rd June
2013, CRP Henri Tudor, Luxembourg
2. Statement of the problem
• Motivation
– In the future IoT, every “thing” will be a system
• More complexity, fewer prior agreements and assumptions
• Can one system operate based on messages of
arbitrary content, sent by (an)other
(unknown) system(s)?
– This is a problem of systems interoperability, not of data,
enterprise, etc. interoperability
– How to represent that content, and how to reason based
on it?
Artificial intelligence
3. Illusion of / artificial intelligence
• Turing test (Turing, 1950)
– Test of a machine's ability to
exhibit intelligent behavior
equivalent to, or
indistinguishable from, that of
an actual human
– Turing reduced the problem of
defining intelligence to a simple
conversation
• Example: ELIZA
– Examines users’ comments for
keywords
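Keyword examination of the ELIZA kind can be sketched in a few lines of Python; the rules below are invented for illustration and are not ELIZA's actual DOCTOR script:

```python
import re

# Illustrative keyword -> response rules, loosely in the style of ELIZA
# (the real DOCTOR script is far larger and ranks keywords).
RULES = [
    (re.compile(r"\bI need (.+)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"\bmy mother\b", re.IGNORECASE), "Tell me more about your family."),
    (re.compile(r"\byes\b", re.IGNORECASE), "You seem quite sure."),
]
DEFAULT = "Please go on."

def respond(comment: str) -> str:
    """Scan the user's comment for keywords; return the first matching response."""
    for pattern, template in RULES:
        match = pattern.search(comment)
        if match:
            return template.format(*match.groups())
    return DEFAULT

print(respond("I need a holiday"))   # Why do you need a holiday?
print(respond("Nothing special."))   # Please go on.
```

The point of the sketch is how little machinery produces the illusion: no model of meaning, only surface pattern matching.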
PARRY
4. If a more specific context for the illusion is
provided, the odds get better
• PARRY (1972) attempted to model the
behavior of a paranoid schizophrenic
• Easily passed the Turing test (evaluated by
psychiatrists)
– correct identification only 48 per cent of the time,
a figure consistent with random guessing
Chinese room experiment
5. Chinese room experiment
• Proves that the Turing test cannot
be used to determine whether a
machine can think
• The experiment is the
centerpiece of Searle's
Chinese room argument
which holds that a program
cannot give a computer a
"mind", "understanding" or
"consciousness", regardless
of how intelligently it may
make it behave
Chinese room argument
6. Chinese room argument
• Axioms
– (A1) "Programs are formal (syntactic)."
– (A2) "Minds have mental contents (semantics)."
– (A3) "Syntax by itself is neither constitutive of nor
sufficient for semantics."
• Problem: What the experiment shows is that passing the Turing test is
possible without understanding. It does not show that it is not
possible to reconstruct or interpret semantics based on the syntax
• Conclusion
– (C1) Programs are neither constitutive of nor sufficient for
minds.
Do systems need to “understand”?
7. OK, systems can(not) be intelligent
(can(not) understand), but is that really
important?
• Turing test is explicitly anthropomorphic
– If our ultimate goal is to create machines that
are more intelligent than people, why should we insist
that our machines must closely resemble people?
– Russell and Norvig: “the goal of aeronautical engineering is
not to make machines that fly exactly like pigeons because
they need to fool other pigeons”
• For example, DL is somewhat close to the knowledge
representation in our minds. But could knowledge be
represented (or reasoning implemented) in some other way,
by using other kinds of formalisms (not yet existing)?
Functionalism
8. Functionalism (Putnam, 1960)
• Mental states (beliefs, desires, being in pain, etc.) are
constituted solely by their functional role - that is,
they are causal relations to other mental states,
sensory inputs, and behavioral outputs
• Mental states are able to be manifested in various
systems, even perhaps computers, so long as the
system performs the appropriate functions
– Mental states can be sufficiently explained without taking
into account the underlying physical medium
• Computational theory of mind (Putnam, 1961)
– the mind is a machine that derives output representations
of the world from input representations in a deterministic
(non-random) and formal (non-semantic) way
Reverse Chinese Room argument
9. Reverse Chinese Room argument
• There may exist a system which when
provided with detailed instructions on how to
interpret “sensory inputs”, could be able to
produce corresponding (reasonable)
“behavioral outputs”, or a “mental state”
whatsoever.
Sensory inputs → [?] → Behavioral outputs
Definition of interoperability
10. My favorite definition (of MANY) of
interoperability
• ISO/IEC 2382 defines interoperability as the
• “capability (of the agent) to communicate,
execute programs, or transfer data among
various functional units in a manner that
requires the user (agent) to have little or no
knowledge of the unique characteristics of
those units”.
Sensation
11. Sensation
• Senses are physiological capacities of
organisms that provide data for perception
• However, perception and
sensation cannot be
considered in isolation,
because of the filtering
(selection), organizing
(grouping, categorization),
and even interpretation
processes
– organizing various stimuli into
more meaningful patterns
Checker shadow illusion
12. Checker shadow illusion [image-only slide]
13. Perception
• Brain's perceptual systems
actively and pre-consciously
attempt to make sense of their
input
Distal stimulus (object) → input energy → sense → transduction →
proximal stimulus → pattern of neural activity → processing →
percept (mental recreation of the distal stimulus)
Perceptual set
14. Perceptual set (expectancy)
• Predisposition to perceive things
in a certain way
– Experience, expectation,
motivation
• Sensations are, by themselves,
unable to provide a unique
description of the world
– Perception is both bottom-up
(senses) and top-down (perceptual
set) process
• Perceptual bias (negative)
– Epistemic commitment
– For example, referee decisions in a
football game
[Figure: the non-word “sael” is grouped and interpreted as
“seal” or “sail”, depending on the perceptual set applied]
15. Could perception be formalized?
Gestalt laws of grouping (1923)
• Laws that, hypothetically, allow us to predict
the interpretation of sensation
– We tend to order our experience in a manner that
is regular, orderly, symmetric, and simple.
– A major aspect of Gestalt psychology is that it
implies that the mind understands external stimuli
as wholes rather than as the sums of their parts
• Grouping by proximity, similarity, complementarity
(closure), symmetry, continuity, etc.
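Grouping by proximity, at least, is straightforward to formalize; a minimal sketch with one-dimensional stimuli and an illustrative distance threshold:

```python
def group_by_proximity(points, threshold):
    """Greedy single-link grouping: two stimuli share a group if they are
    connected by a chain of neighbours no further apart than `threshold`."""
    groups = []
    for p in points:
        # Find existing groups that p is close to, and merge them with p.
        near = [g for g in groups
                if any(abs(p - q) <= threshold for q in g)]
        merged = [p]
        for g in near:
            merged.extend(g)
            groups.remove(g)
        groups.append(sorted(merged))
    return groups

# Stimuli on a line: two perceptually distinct clusters emerge.
print(group_by_proximity([1, 2, 3, 10, 11], threshold=2))
# [[1, 2, 3], [10, 11]]
```

The other laws (similarity, closure, continuity) would need richer feature spaces, but the same pattern applies: a distance measure plus a grouping rule.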
Cognition
16. Cognition
• How we know the world
– The term "cognition" refers to all processes by
which the sensory input is transformed, reduced,
elaborated, stored, recovered, and used.
• Includes processes such as memory,
association, concept formation, pattern
recognition, attention, perception, problem
solving, mental imagery, etc.
Concept learning
17. Concept learning
• Bruner (1967): "the search for and listing of
attributes that can be used to distinguish
exemplars from non-exemplars of various
categories."
– Trial-and-error
– Based on applied perception rules (not only
identification, but also assumption)
• Explanation-based theory of concept learning
– Mind observes or receives the qualities of a thing,
– Then, it forms a concept which possesses and is
identified by those qualities
– Derived from theory of progressive generalizing
(1986)
• the mind separates information that applies to
more than one thing and enters it into a broader
description of a category of things. This is done by
identifying sufficient conditions for something to fit
in a category
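Progressive generalizing of this kind (pooling the information that applies to more than one exemplar into a broader category description) can be caricatured as attribute-set intersection; the exemplars and attributes below are invented for illustration:

```python
def generalize(exemplars):
    """Form a category description from the attributes shared by all
    exemplars: a crude stand-in for progressive generalizing."""
    descriptions = [set(attrs) for attrs in exemplars]
    return set.intersection(*descriptions)

def is_member(thing, concept):
    """Treat the shared attributes as the category's membership test."""
    return concept <= set(thing)

birds = [
    {"has_feathers", "lays_eggs", "flies"},
    {"has_feathers", "lays_eggs", "swims"},   # a penguin-like exemplar
]
concept = generalize(birds)
print(sorted(concept))
# ['has_feathers', 'lays_eggs']
print(is_member({"has_feathers", "lays_eggs", "sings"}, concept))
# True
```

Note the simplification: intersection yields conditions common to the exemplars, while the theory asks for conditions sufficient for membership; a fuller model would test the learned description against non-exemplars as well.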
Intensional conceptualization
18. Intensional conceptualization
• Logical positivists: meaning is nothing more or less
than the truth conditions it involves.
• Here, the meaning is explained by using the
references to the actual existing (possibly also
logically explained) things in the world
– By using not only necessary but also sufficient conditions
• The process of the representation of such meanings
is called intensional conceptualization.
Meaning in linguistics
19. Meaning in linguistics
• Meaning is what the sender expresses,
communicates or conveys in its message to the
receiver (or observer) and what the receiver infers
from the current context (Akmajian et al, 1995)
– Different contexts -> different interpretations
– Linguistic context
• How meaning is understood, without relying on intent and
assumptions
• Depends on the expressivity of the vocabulary and the level of abstraction
– Situational context
• Refers to non-linguistic factors which affect the meaning of the
message
Definition of systems interoperability
20. Formalized systems interoperability
(based on Sowa, 2000)
data(p) ∧ system(S) ∧ system(R) ∧ interoperable(S,R) ⇒
∀p (
(transmitted-from(p,S) ∧ transmitted-to(p,R)) ∧
∀q(statement-of(q,S) ∧ p⇒q) ⇒ ∃q’(statement-of(q’,R) ∧ p⇒q’ ∧ q’⇔q)
)
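The condition can be read operationally: S and R are interoperable with respect to a transmitted message p if every statement of S that p entails has an equivalent counterpart among R's statements. A toy sketch, drastically simplifying entailment to fact-set inclusion and statement equivalence to identity:

```python
def interoperable(message, statements_S, statements_R):
    """Check the Sowa-style condition for one transmitted message:
    every statement of S entailed by the message must have an
    equivalent (here: identical) statement entailed on R's side.
    Statements are modelled as frozensets of atomic facts; a message
    entails a statement when it contains all of the statement's facts."""
    def entails(p, q):
        return q <= p
    for q in statements_S:
        if entails(message, q):
            if not any(entails(message, q2) and q2 == q for q2 in statements_R):
                return False
    return True

# Invented example: a purchase-order message understood by R_good but not R_bad.
p = frozenset({"order(o1)", "item(o1,bolt)"})
S = [frozenset({"order(o1)"})]
R_good = [frozenset({"order(o1)"})]
R_bad  = [frozenset({"invoice(i1)"})]
print(interoperable(p, S, R_good))  # True
print(interoperable(p, S, R_bad))   # False
```

In the full formalization, entailment and equivalence would be logical consequence and logical equivalence over the two systems' theories, not set operations.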
Summary of human communication process
21. Human communication as a raw model for
interoperability
Stimulus (sensory energy) → Sensation → Perception → Cognition → Articulation → Response
• Sensation – receives the stimulus (sensory energy)
• Perception – provides meaning to various sensations, in the
contexts of perceptual sets: motivation, expectations,
experience, culture, etc.
• Cognition – gains knowledge and comprehension from the
sensations; storage, reasoning, problem solving, imagining,
concept learning
• Articulation – articulates the response, in the context of
recipients, language and means
Requirements for interoperability
22. Requirements for interoperability
• Sensation
– “Ask” & “Tell” interface
• Perception
– Grouping, categorization and selection laws: semantic
matching and reasoning
– Perceptual set
• Explicit knowledge (ontologies)
• Motivation?
• Cognition
– Triple store
– Formalized business rules
– Rules-enabled reasoning (generalization and specialization)
– Assertion of new knowledge
[Architecture: web services, query processing, semantic matching,
reasoner, ontologies, mappings]
∃S(system(S)) ∧ ∃R(system(R)) ∧
∀p (
(transmitted-from(p,S) ∧ transmitted-to(p,R)) ∧
∀q(statement-of(q,S) ∧ p⇒q) ⇒ ∃q’(statement-of(q’,R) ∧ p⇒q’ ∧ q’⇔q)
) ⇒ interoperable(S,R)
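One way to sketch the “Ask” & “Tell” interface over a triple store (the class, method names and triples are invented for illustration; a real system would expose SPARQL over an RDF store):

```python
class TripleStore:
    """Minimal 'Ask' & 'Tell' interface over a set of RDF-like triples."""

    def __init__(self):
        self.triples = set()

    def tell(self, subject, predicate, obj):
        """Assert new knowledge (the 'Tell' half of the interface)."""
        self.triples.add((subject, predicate, obj))

    def ask(self, subject=None, predicate=None, obj=None):
        """Query the store, with None acting as a wildcard (the 'Ask' half)."""
        return [t for t in self.triples
                if all(want in (None, got)
                       for want, got in zip((subject, predicate, obj), t))]

store = TripleStore()
store.tell("order1", "rdf:type", "po:PurchaseOrder")
store.tell("order1", "po:hasItem", "item42")
print(store.ask("order1", "po:hasItem", None))
# [('order1', 'po:hasItem', 'item42')]
```

Semantic matching and rules-enabled reasoning would sit on top of such a store, rewriting asked patterns against the ontologies and asserting derived triples back via `tell`.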
Implementation of interoperable systems
23. Implementing interoperable systems
• S1-Sn – Enterprise Information Systems
• OL1-OLn – Local ontologies
• OD1, OD2 – Domain ontologies
• MLiDj – Mappings between local
and domain ontologies
• MD1D2 – Mapping between the two
domain ontologies
• Mappings between systems’ local ontologies are composed
from local-to-domain (and domain-to-domain) mappings:
– MO1O2 ≡ f(ML1D1, ML2D1)
– MO1On ≡ f(ML1D1, MLnD1)
– MO1Oi ≡ f(ML1D1, MD1D2, MLiD2)
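The composition MO1O2 ≡ f(ML1D1, ML2D1) can be illustrated with mappings modelled as plain dictionaries (all concept names are invented for the example):

```python
def compose(*mappings):
    """Compose concept mappings left to right: follow each source concept
    through the chain; concepts that drop out of any mapping are skipped."""
    def follow(concept):
        for m in mappings:
            concept = m.get(concept)
            if concept is None:
                return None
        return concept
    return {src: follow(src) for src in mappings[0] if follow(src) is not None}

# ML1D1: local ontology OL1 -> domain ontology OD1
ML1D1 = {"l1:Order": "d1:PurchaseOrder"}
# ML2D1 maps OL2 -> OD1; inverted here so we can travel OD1 -> OL2.
ML2D1_inv = {"d1:PurchaseOrder": "l2:Narudzba"}

# MO1O2 = f(ML1D1, ML2D1): OL1 -> OL2 via the shared domain ontology.
MO1O2 = compose(ML1D1, ML2D1_inv)
print(MO1O2)
# {'l1:Order': 'l2:Narudzba'}
```

The third case, MO1Oi ≡ f(ML1D1, MD1D2, MLiD2), is the same composition with an extra hop through the domain-to-domain mapping MD1D2. Real ontology mappings are of course relations with correspondence types and confidences, not one-to-one dictionaries.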
“Human” ontology
24. What makes the good (“human”)
ontology (1/2) ?
• Well-defined scope
– Provides context for communication, via a set of
competency questions
– Think of the ontology as a perceptual set
• In a situational (motivation) and linguistic (expressivity, related to
domain(s)) context
– The more implicit, the better?
• Intensional approach to conceptualization
– Remember explanation-based theory of concept learning?
• Epistemic commitment
– Obligation to uphold the factual truth of a given
proposition and to provide reasons for one’s belief in that
proposition, irrespective of the context
25. What makes the good (“human”)
ontology (2/2) ?
• Taxonomy
– Referring to “internal” or “external” concepts
– Remember progressive generalization?
• No ontology is an island
– Mappings with the concepts of other ontologies in
horizontal (expressivity) and vertical (level of abstraction)
direction
• Meta-ontologies
– Complement the DL expressivity with new representation
and inference methodologies and strategies
“Human” ontology continuum
26. “Human” ontology continuum [image-only slide]
27. Some future challenges
• Methodology issues
– Semantic vs. semantically-facilitated
interoperability
– Avoiding Yet-Another-Ontology (YAO) syndrome
– Is expressivity of DL sufficient to facilitate efficient
and effective reasoning and/or semantic
matching?
• Technical issues
– How to make systems and local ontologies work
together?
28. Thank you for your attention
dr Milan Zdravković
milan.zdravkovic@gmail.com
http://www.masfak.ni.ac.rs/milan.zdravkovic
Laboratory for Intelligent Production Systems (LIPS)
Faculty of Mechanical Engineering in Niš, University of Niš,
Serbia