Invited Tutorial - Cognitive Design for Artificial Minds AI*IA 2022
1. Cognitive Design for Artificial Minds
Antonio Lieto
Università di Torino, Dipartimento di Informatica, IT
ICAR-CNR, Palermo, IT
November 29th, AI*IA 2022, Udine
Web: https://www.antoniolieto.net
Mastodon: @antoniolieto@fediscience.org
2. Lieto A. (2021). Cognitive Design for Artificial Minds, Routledge/Taylor & Francis, London/New York.
3. Driving Questions
- What characterizes cognitively inspired AI systems?
- What are examples of cognitively inspired AI systems?
- How do they differ from standard AI systems?
- How can cognitively inspired AI systems be used?
5. From Human to Artificial Cognition
Inspiration
Why? Humans (and/or other natural systems) are still, by far, unmatched in their ability to solve a wide range of problems.
6. From Human to Artificial Cognition (and back)
Inspiration
Explanation
8. The cybernetics tradition of AI
This approach to the study of the artificial borrowed its original inspiration, from a historical perspective, from the methodological apparatus developed by scholars in cybernetics.
Norbert Wiener's 1948 book "Cybernetics: Or Control and Communication in the Animal and the Machine".
One of the underlying ideas of cybernetics was that of building mechanical models to simulate the adaptive behavior of natural systems.
(Cordeschi, 2002): "the fundamental insight of cybernetics was in the proposal of a unified study of organisms and machines".
9. When does a biologically/cognitively inspired computational system/architecture have explanatory power with respect to the natural system taken as source of inspiration?
What are the requirements to consider in order to design a computational model of cognition with explanatory power?
Functionalist vs Structuralist Design Approaches
10. Functionalist vs Structuralist Models
Functionalist models: same input-output specification, with only a surface resemblance of the internal components and of their working mechanisms between artificial and natural system.
Structuralist models: same input-output specification plus a constrained resemblance of the internal components and of their working mechanisms between artificial and natural system.
The two poles lie on a continuum.
Along this continuum, different kinds of explanation are involved: mechanistic, teleological, functional, evolutionary, causal, and inference to the best explanation (IBE).
11. Wiener’s “Paradox”
“The best material model of a cat is another or possibly the same cat” (Rosenblueth &
Wiener, ’45)
12. A Design Problem
Z. Pylyshyn ('79): "if we do not formulate any restriction about a model we obtain the functionalism of a Turing machine. If we apply all the possible restrictions we reproduce a whole human being".
• A design perspective: between the explanatory level of functionalism (based on the macroscopic stimulus-response relationship) and the microscopic one of fully structured models (reductionist materialism) we have, in the middle, a lot of possible structural models.
(Functionalist models - continuum - Structuralist models)
13. Many Structural Models
It is possible to build structural models of cognition at different levels of abstraction.
Cognitive Function (e.g. NL Understanding) -> 1:N -> Cognitive Processes (syntax, morphology, lexical processing…) -> 1:N -> Neural Structures
The first mapping concerns the cognitive plausibility of the processes; the second their biological plausibility.
14. Many Structural Models
Both the presented AI approaches may build structural models of cognition at different levels of abstraction (having empirical adequacy).
Cognitive Function (e.g. NL Understanding) -> 1:N -> Cognitive Processes (syntax, morphology, lexical processing…) -> 1:N -> Neural Structures
Classical Cognitivism targets the cognitive plausibility of the processes; Emergent AI targets their bio-physical plausibility.
15. Take home message (part 1)
• Cognitive artificial models have explanatory power only if they are structurally valid models (realizable in different ways and empirically adequate).
• Cognitive artificial systems built with this design perspective have an explanatory role for the theory they implement, and the "computational experiment" can provide results useful for refining or rethinking theoretical aspects of the natural system that inspired them.
16. “Natural/Cognitive” Inspiration and AI
Early AI (N. Wiener; A. Newell, H. Simon; M. Minsky; R. Schank; D. Rumelhart, J. McClelland): cognitive or biological inspiration for the design of "intelligent systems".
Modern AI (from the mid-'80s): "intelligence" in terms of optimality of a performance on narrow tasks.
Nowadays: renewed attention. "The gap between natural and artificial systems is still enormous" (A. Sloman, AIC 2014).
17. Modern successful AI systems
IBM Watson (symbolic)
AlphaGo, by DeepMind (connectionist)
26. GPT-3 / Problems
• Text completion is a prediction task, not a test of compositionality
• Lack of commonsense reasoning
from https://cs.nyu.edu/~davise/papers/GPT3CompleteTests.html
35. Minimal Cognitive Grid
"a non subjective, graded, evaluation framework allowing both quantitative and qualitative analysis about the cognitive adequacy and the human-like performances of artificial systems in both single and multi-tasking settings." (Lieto, 2021)
Three dimensions:
- Functional/Structural Ratio (position on the Functionalist-Structuralist continuum)
- Generality
- Performance match (including errors and psychometric measures)
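As an illustration, the three dimensions of the Minimal Cognitive Grid could be recorded as a simple data structure. The field names, scales, and example entries below are illustrative assumptions, not part of Lieto's framework:

```python
from dataclasses import dataclass

@dataclass
class MinimalCognitiveGrid:
    """Hypothetical record for the three MCG dimensions
    (field names and scales are illustrative assumptions)."""
    system: str
    functional_structural_ratio: float  # 0.0 = purely functionalist, 1.0 = purely structuralist
    generality: int                     # number of tasks/domains the system covers
    performance_match: str              # incl. errors and psychometric measures

profiles = [
    MinimalCognitiveGrid("narrow game player", 0.1, 1,
                         "superhuman score, non-human-like errors"),
    MinimalCognitiveGrid("cognitive-architecture model", 0.7, 6,
                         "matches human reaction times and error patterns"),
]

# A higher ratio and broader generality indicate a more structuralist
# profile along the functionalist-structuralist continuum.
for p in sorted(profiles, key=lambda p: p.functional_structural_ratio):
    print(p.system, p.functional_structural_ratio, p.generality)
```

Such a record makes the "graded, non-subjective" character of the grid concrete: systems are compared dimension by dimension rather than by a single intelligence score.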
37. Suppose I am not interested in the reverse inference…
Why a cognitive approach?
38. Models of Rationality
- Morgenstern & von Neumann, Expected Utility Theory: decision makers as optimizers
- Simon, Bounded Rationality: decision makers as "satisficers"
40. Models of Rationality
- Morgenstern & von Neumann: Expected Utility Theory
- Simon: Bounded Rationality
- Kahneman & Tversky: Cognitive Biases
- Gigerenzer: Heuristics
41. Linda Problem
A version of the Linda example:
- Linda was young in the '70s
- Linda likes the color red
- Linda graduated in philosophy
- Linda is against nuclear power (a "green" person)
Which is more probable?
- Linda is a bank teller
- Linda is a feminist and a bank teller
42. Evolutionarily shaped heuristics
The conjunction fallacy can be interpreted as an example of the strong tendency of human subjects to resort to prototypical information in categorization (non-monotonic categorization).
A version of the Linda example:
- Pippo weighs 200 kg
- Pippo is 2 metres tall
- Pippo growls and roars
- Pippo has robust teeth
Which is more probable?
- Pippo is a mammal
- Pippo is a mammal and is wild and dangerous
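The fallacy in both examples violates the conjunction rule of probability: for any two events A and B, P(A and B) <= P(A). A minimal simulation sketch (the attribute rates below are invented for illustration):

```python
import random

random.seed(0)

# Hypothetical population of 10,000 "Lindas": each person gets two
# binary attributes with illustrative, made-up base rates.
population = [
    {"bank_teller": random.random() < 0.10,
     "feminist": random.random() < 0.60}
    for _ in range(10_000)
]

n = len(population)
p_teller = sum(p["bank_teller"] for p in population) / n
p_both = sum(p["bank_teller"] and p["feminist"] for p in population) / n

# Conjunction rule: the joint event can never be more probable
# than either of its conjuncts.
assert p_both <= p_teller
print(f"P(bank teller)              = {p_teller:.3f}")
print(f"P(bank teller and feminist) = {p_both:.3f}")
```

Whatever rates are chosen, the conjunction can never come out more probable than the single conjunct, which is exactly what human subjects judge it to be.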
46. Marr's Hierarchy / Levels of Analysis
1. Computational Theory (the most important level): goal, logic, strategy, model
2. Representation & Algorithm: I/O representation, transformation algorithm
3. Hardware/Software Implementation: physical realization
The levels are only loosely coupled.
47. Cash register
At the computational level, the functioning of the register can be accounted for in terms of arithmetic (e.g. in terms of the theory of addition): at this level what matters is the computed function (addition) and abstract properties of it such as commutativity or associativity (Marr 1982, p. 23).
The level of representation and algorithm specifies the form of the representations and the processes elaborating them: "we might choose Arabic numerals for the representations, and for the algorithm we could follow the usual rules about adding the least significant digits first and 'carrying' if the sum exceeds 9" (ibid.).
Finally, the level of implementation has to do with how such representations and processes are physically realized; for example, the digits could be represented as positions on a metal wheel or, alternatively, as binary numbers coded by the electrical states of digital circuitry.
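Marr's example can be made concrete. The same computational-level function (addition, with its commutativity and associativity) admits many representation-and-algorithm choices; here is a sketch of the one Marr quotes, Arabic numerals with least-significant-digit-first carrying (the function name is ours, not Marr's):

```python
def add_arabic(a: str, b: str) -> str:
    """Add two non-negative integers given as Arabic-numeral strings,
    least significant digits first, 'carrying' if the sum exceeds 9:
    one choice at Marr's representation-and-algorithm level."""
    digits_a = [int(d) for d in reversed(a)]
    digits_b = [int(d) for d in reversed(b)]
    result, carry = [], 0
    for i in range(max(len(digits_a), len(digits_b))):
        da = digits_a[i] if i < len(digits_a) else 0
        db = digits_b[i] if i < len(digits_b) else 0
        s = da + db + carry
        result.append(s % 10)   # current digit
        carry = s // 10         # carry if the sum exceeds 9
    if carry:
        result.append(carry)
    return "".join(str(d) for d in reversed(result))

print(add_arabic("758", "467"))  # → 1225
```

A metal wheel or binary circuitry would implement the same computational-level function with entirely different representations, which is precisely the independence between Marr's levels.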
48. 5 steps - Resource Rationality
1) Start with a computational-level (i.e. functional) description of an aspect of cognition, formulated as a problem and its optimal solution
2) Posit which class of algorithms the system might use to approximately solve this problem, the cost of the computational resources used by these algorithms, and the utility of approximating the correct solution
3) Find the algorithm in the class that optimally trades off resources and approximation accuracy
4) Evaluate the predictions of the resulting rational process model against empirical data
5) Refine the computational-level theory (step 1) or the assumed computational architecture and its constraints (step 2) to reduce the discrepancies
Lieder & Griffiths (2019)
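Steps 2 and 3 of this recipe can be sketched numerically. Suppose the posited algorithm class is "estimate a probability by drawing n mental samples": accuracy grows with n, but each sample has a time cost. All numeric values below are invented for illustration, not taken from Lieder & Griffiths:

```python
import math

def expected_utility(n: int, cost_per_sample: float = 0.01) -> float:
    """Utility of approximating the correct solution minus resource cost.
    Monte Carlo error shrinks roughly as 1/sqrt(n) (illustrative model)."""
    accuracy = 1.0 - 1.0 / math.sqrt(n)
    return accuracy - cost_per_sample * n

# Step 3: find the algorithm in the class (here: the sample size)
# that optimally trades off resources and approximation accuracy.
best_n = max(range(1, 500), key=expected_utility)
print(best_n)  # → 14
```

The resource-rational prediction is then the behavior of the n = 14 sampler, to be compared against empirical data in step 4; a mismatch sends the analysis back to steps 1 and 2.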
54. Other problems
• Cannot learn new knowledge (the costs of retraining are unfeasible from a computational, environmental, and economic point of view)
• "Catastrophic interference" problem: new knowledge overwrites (rather than integrates with) knowledge already distributed in such network models
• Integration
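Catastrophic interference shows up even in a one-parameter toy model (everything below is an invented minimal illustration, not a claim about any specific network): training sequentially on a second task drives the weight away from the value the first task required.

```python
w = 0.0  # single weight; the "network" computes y = w * x

def train(w, pairs, lr=0.1, epochs=200):
    # plain gradient descent on squared error (w*x - t)^2
    for _ in range(epochs):
        for x, t in pairs:
            w -= lr * 2 * (w * x - t) * x
    return w

task_a = [(1.0, 2.0)]    # task A is solved when w ≈ 2
task_b = [(1.0, -1.0)]   # task B is solved when w ≈ -1

w = train(w, task_a)
error_a_before = (w * 1.0 - 2.0) ** 2  # near zero: task A learned
w = train(w, task_b)                   # sequential training, no rehearsal
error_a_after = (w * 1.0 - 2.0) ** 2   # large: task A overwritten

print(error_a_before < 1e-6, error_a_after > 1.0)  # → True True
```

Because the two tasks must share the same parameter, learning B necessarily destroys A unless the training interleaves (integrates) both, which is the point of the "overwrites rather than integrates" remark above.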
59. Cognitive Architectures
Allen Newell (1990), Unified Theories of Cognition
A cognitive architecture (Newell, 1990) implements the invariant structure of the cognitive system.
The work on such systems started in the '80s (e.g. SOAR; Newell, Laird and Rosenbloom, 1982).
It captures the underlying commonality between different intelligent agents and provides a framework from which intelligent behavior arises.
The architectural approach emphasizes the role of memory in the cognitive process.
73. Commonsense knowledge:
- grounding element of layers of growing thinking capabilities
- bridge between perception and cognition
- problem solving in layers
75. Instinctive reaction (hears a sound…)
Learned reaction (car recognition)
Deliberate thinking (decides to sprint…)
Reflective thinking (reflects upon her decision)
Self-reflection (reflects about her plans)
Social level (what my friends…)
92. > 200 Published Models in ACT-R since 1997
I. Perception & Attention
1. Psychophysical Judgements
2. Visual Search
3. Eye Movements
4. Psychological Refractory Period
5. Task Switching
6. Subitizing
7. Stroop
8. Driving Behavior
9. Situational Awareness
10. Graphical User Interfaces
II. Learning & Memory
1. List Memory
2. Fan Effect
3. Implicit Learning
4. Skill Acquisition
5. Cognitive Arithmetic
6. Category Learning
7. Learning by Exploration and Demonstration
8. Updating Memory & Prospective Memory
9. Causal Learning
III. Problem Solving & Decision Making
1. Tower of Hanoi
2. Choice & Strategy Selection
3. Mathematical Problem Solving
4. Spatial Reasoning
5. Dynamic Systems
6. Use and Design of Artifacts
7. Game Playing
8. Insight and Scientific Discovery
IV. Language Processing
1. Parsing
2. Analogy & Metaphor
3. Learning
4. Sentence Memory
V. Other
1. Cognitive Development
2. Individual Differences
3. Emotion
4. Cognitive Workload
5. Computer Generated Forces
6. fMRI
7. Communication, Negotiation, Group Decision Making
Visit http://act.psy.cmu.edu/papers/ACT-R_Models.htm
93. ACT-R
Composed of different integrated modules coordinated by a centralised production-rule system.
Each module communicates with the others through its own buffers (a sort of micro-specialized working memory), and the central system selects its next actions by taking into account the buffer contents.
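This control structure can be sketched in a few lines. The code below is a highly simplified toy, not the real ACT-R API: modules expose buffers, and a central cycle fires the first production rule whose condition matches the current buffer contents.

```python
# Toy buffers: a goal buffer and a retrieval buffer
buffers = {"goal": {"state": "start"}, "retrieval": {}}

def rule_retrieve(bufs):
    """If the goal is to start, ask the (stubbed) declarative module
    to place a chunk in the retrieval buffer."""
    if bufs["goal"].get("state") == "start":
        bufs["retrieval"]["chunk"] = "fact-7"  # stand-in for a real retrieval
        bufs["goal"]["state"] = "harvest"
        return True
    return False

def rule_harvest(bufs):
    """If a chunk has arrived in the retrieval buffer, consume it."""
    if bufs["goal"].get("state") == "harvest" and "chunk" in bufs["retrieval"]:
        bufs["goal"]["state"] = "done"
        return True
    return False

productions = [rule_retrieve, rule_harvest]

# Central cycle: on each step, select and fire the first production
# whose condition matches the buffer contents.
while buffers["goal"]["state"] != "done":
    next(r for r in productions if r(buffers))

print(buffers["goal"]["state"])  # → done
```

The essential ACT-R idea survives even in this caricature: modules never talk to each other directly; all coordination passes through buffers inspected by the central production system.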
98. ACT-R, SOAR, CLARION and LIDA: Extended Declarative Memories with DUAL-PECCS
Salvucci et al. 2014 (DBpedia)
Lieto et al., IJCAI 2015; JETAI 2017
99. E.g. extending learning strategies in SOAR
• Lieto et al. (2019). Beyond Subgoaling: A dynamic knowledge generation framework for creative problem solving in cognitive architectures. Cognitive Systems Research.
100. Community
• AIC workshop on AI and Cognition (started @AI*IA 2013!)
• VISCA Conference on Cognitive Architectures
• ACS Conference on Cognitive Systems
• BICA Conference on Biologically Inspired Cognitive
Architectures
• SOAR workshop (> 40 editions!)
• ACT-R workshop (>25 editions!)
• IJCAI, AAAI, ECAI…
101. Summing up…and looking ahead
• Behavioral performances are not sufficient to ascribe cognitive faculties to AI systems (see the Minimal Cognitive Grid)
• Behavioral tests (e.g. the Turing Test) don't say very much about the actual "intelligence" of a system
• In real-world contexts, the gap between natural and artificial intelligence is still enormous
• Models working on the challenge of integrated intelligence will play a major role in the development of AI technologies and in the understanding of mental phenomena
• The time now seems ripe for a renewed collaboration between two "sciences of the artificial": AI and Cognitive Science
102. Cognitive Design for Artificial Minds