This is a presentation made for a PhD course, based on the work presented by Pimentel, C.F. and Cravo, M.R. in their paper "Goal based denial and wishful thinking".
More about this work can be found here:
http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6496211
3. The Problem
People make plans
People deny a reality they do not like
People like to think the best about their future
People share their beliefs with others
People get motivated and influenced by their friends and family
Are agents able to behave in a similar way?
Multiple Benefits
5. The Coordinated Attack Problem
(aka the Two Generals’ or Warring Generals’ Problem)
❏ Two generals stand on opposite hilltops, trying to coordinate an attack on a third general in the valley between them.
❏ Communication is via messengers who must travel across enemy lines (and may be caught).
❏ If a general attacks on his own, he loses.
❏ If both attack simultaneously, they win.
❏ What protocol can ensure a simultaneous attack?
7. The Coordinated Attack Problem
(A Naive Protocol)
❏ Let us call the generals:
❏ S (sender)
❏ R (receiver)
❏ Protocol for general S:
❏ Send an “attack” message to R
❏ Keep sending it until an acknowledgement is received
❏ Protocol for general R:
❏ Do nothing until an “attack” message is received from S; then send an acknowledgement back to S
8. The Coordinated Attack Problem
(States)
❏ State of general S:
❏ A pair (msgS, ackS), where msgS ∈ {0,1} and ackS ∈ {0,1}
❏ msgS = 1 means a message “attack” was sent
❏ ackS = 1 means an acknowledgement was received
❏ State of general R:
❏ A pair (msgR, ackR), where msgR ∈ {0,1} and ackR ∈ {0,1}
❏ msgR = 1 means a message “attack” was received
❏ ackR = 1 means an acknowledgement was sent
❏ Global state: <(msgS, ackS), (msgR, ackR)>
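A minimal sketch of this state representation in Python; the names `LocalState` and `GlobalState` are illustrative, not from the paper:

```python
from collections import namedtuple

# Local state of a general: the pair (msg, ack), each 0 or 1.
LocalState = namedtuple("LocalState", ["msg", "ack"])
# Global state: the pair of local states <S, R>.
GlobalState = namedtuple("GlobalState", ["S", "R"])

# The initial global state <(0,0),(0,0)>: nothing sent, nothing acknowledged.
initial = GlobalState(S=LocalState(msg=0, ack=0), R=LocalState(msg=0, ack=0))
```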
9. The Coordinated Attack Problem
(Possible Worlds)
❏ Initial global state: <(0,0),(0,0)>
❏ The state changes as a result of:
❏ Protocol events
❏ Nondeterministic effects of nature
❏ Changes in state are captured in a history
❏ Example:
❏ S sends a message to R; R receives it and sends an acknowledgement, which is then received by S
❏ <(0,0),(0,0)>, <(1,0),(1,0)>, <(1,1),(1,1)>
❏ In our model: possible world = possible history
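Continuing the sketch, a possible world can be represented directly as the history of global states; the example run above, with every message delivered, looks like this (names are illustrative):

```python
# A possible world is the history of global states, written here as
# tuples ((msgS, ackS), (msgR, ackR)).
history_all_delivered = (
    ((0, 0), (0, 0)),  # initial state
    ((1, 0), (1, 0)),  # S sent "attack"; R received it
    ((1, 1), (1, 1)),  # R sent an acknowledgement; S received it
)
```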
10. The Coordinated Attack Problem
(Indistinguishable Worlds)
❏ Defining the accessibility relation R_i:
❏ Two histories are indistinguishable to agent i if their final global states have identical local states for agent i
❏ Example: the world
❏ <(0,0),(0,0)>, <(1,0),(1,0)>, <(1,0),(1,1)> is indistinguishable to general S from this world:
<(0,0),(0,0)>, <(1,0),(0,0)>, <(1,0),(0,0)>
❏ In words: S sends a message to R, but does not get an acknowledgement. This could be because R never received the message, or because he did but his acknowledgement did not reach S.
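This test can be sketched directly from the definition; the two example histories above come out indistinguishable for S but not for R (a sketch with assumed names):

```python
# Two histories are indistinguishable to an agent iff that agent's local state
# in the final global state is the same in both (as defined above).
def indistinguishable(h1, h2, agent):
    i = 0 if agent == "S" else 1  # index of the agent's local state in a global state
    return h1[-1][i] == h2[-1][i]

# Acknowledgement sent by R but lost, vs. message from S lost entirely:
h_ack_lost = (((0, 0), (0, 0)), ((1, 0), (1, 0)), ((1, 0), (1, 1)))
h_msg_lost = (((0, 0), (0, 0)), ((1, 0), (0, 0)), ((1, 0), (0, 0)))

print(indistinguishable(h_ack_lost, h_msg_lost, "S"))  # True: S sees (1, 0) in both
print(indistinguishable(h_ack_lost, h_msg_lost, "R"))  # False: (1, 1) vs. (0, 0)
```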
11. The Coordinated Attack Problem
(What do the generals know?)
❏ Suppose the actual world is:
❏ <(0,0),(0,0)>, <(1,0),(1,0)>, <(1,1),(1,1)>
❏ In this world, the following hold:
❏ K_S attack
❏ K_R attack
❏ K_S K_R attack
❏ Unfortunately, this also holds:
❏ ¬K_R K_S K_R attack
❏ R does not know that S knows that R knows that S intends to attack. Why? Because, from R’s perspective, the acknowledgement he sent could have been lost, in which case S would not know that R received the message.
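These four claims can be checked mechanically with the usual possible-worlds definition: K_i φ holds at a world w iff φ holds at every world that agent i cannot distinguish from w. A self-contained sketch over four hand-picked histories (all names are illustrative, and the set of worlds is deliberately small):

```python
# Possible worlds: histories of global states ((msgS, ackS), (msgR, ackR));
# only the final global state matters for indistinguishability.
w_nothing  = (((0, 0), (0, 0)),)                                     # no "attack" message ever sent
w_msg_lost = (((0, 0), (0, 0)), ((1, 0), (0, 0)))                    # "attack" sent by S but lost
w_ack_lost = (((0, 0), (0, 0)), ((1, 0), (1, 0)), ((1, 0), (1, 1)))  # R's acknowledgement lost
w_actual   = (((0, 0), (0, 0)), ((1, 0), (1, 0)), ((1, 1), (1, 1)))  # everything delivered
WORLDS = [w_nothing, w_msg_lost, w_ack_lost, w_actual]

def indist(w1, w2, agent):
    """Histories are indistinguishable to an agent iff his final local state is the same."""
    i = 0 if agent == "S" else 1
    return w1[-1][i] == w2[-1][i]

def K(agent, prop):
    """K_agent prop: prop holds in every world the agent cannot distinguish from the current one."""
    return lambda w: all(prop(v) for v in WORLDS if indist(w, v, agent))

attack = lambda w: w[-1][0][0] == 1  # the "attack" message has been sent (msgS = 1)

print(K("S", attack)(w_actual))                  # True:  K_S attack
print(K("R", attack)(w_actual))                  # True:  K_R attack
print(K("S", K("R", attack))(w_actual))          # True:  K_S K_R attack
print(K("R", K("S", K("R", attack)))(w_actual))  # False: not K_R K_S K_R attack
```

The last line prints False precisely because R cannot rule out the world in which his acknowledgement was lost.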
12. The Coordinated Attack Problem
(What do the generals know?)
❏ Possible solution:
❏ S acknowledges R’s acknowledgement
❏ Then we have:
❏ K_R K_S K_R attack
❏ Unfortunately, we also have:
❏ ¬K_S K_R K_S K_R attack
❏ Is there a way out of this?
13. Motivation
Belief revision is the process of changing beliefs to take into account a new piece of information. The logical formalization of belief revision is researched in philosophy, in databases, and in artificial intelligence for the design of rational agents.
❏ Why should an agent always prefer new information over its previous beliefs?
❏ How can an agent autonomously generate its own order(s) among beliefs?
❏ Can human-like preferences, in belief revision, be adequately expressed using an order (or orders) among beliefs?
16. Changes in the World: The idea
❏ Interpretation of a belief set B:
❏ the set of possible worlds where B is true
❏ Notification of some change in the actual world:
❏ The agent’s description of the possible states of affairs must be modified accordingly:
❏ Our description of the actual world is typically incomplete, which means that there are several states of affairs (possible worlds) that are consistent with what we believe. Hence, an update must ensure that the changes are made true in the “candidate worlds” that survive the update.
[Diagram: denial and wishful thinking, each shown in a passive and an active form]
17. WTR Agent Definition
If ag is the agent using WTR, our model assumes that its internal state contains, among other items, the following information:
❏ The agent’s knowledge base, represented by KB(ag)
❏ The agent’s goals, represented by Goals(ag)
❏ For each other agent agi, the subjective credibility that our agent associates with agi, represented by Cred(ag, agi)
❏ The agent’s wishful thinking coefficient
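A minimal sketch of how this internal state could be laid out in code; all names (`WTRAgent`, `wt_coef`, the example values) are assumptions for illustration, not the paper’s notation:

```python
from dataclasses import dataclass, field

@dataclass
class WTRAgent:
    """Illustrative container for the internal state assumed by the WTR model."""
    kb: set = field(default_factory=set)      # KB(ag): the agent's knowledge base
    goals: set = field(default_factory=set)   # Goals(ag): the agent's goals
    cred: dict = field(default_factory=dict)  # Cred(ag, agi): credibility assigned to each other agent
    wt_coef: float = 0.5                      # wishful thinking coefficient (range assumed here)

ag = WTRAgent(goals={"pass_exam"}, cred={"Peter": 0.8, "Susan": 0.6})
```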
18. Reasoning
❏ Monotonic reasoning
❏ If KB ⊨ φ, then ∀γ, KB ∧ γ ⊨ φ
❏ The inference engine only performs ask and tell on the KB, never retract
❏ Non-monotonic reasoning
❏ Allows KB ⊨ φ and then KB ∧ γ ⊭ φ
❏ Previously derived facts can be retracted upon the arrival of new, conflicting evidence (for example, from sensors)
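A toy sketch of the contrast, using the classic “birds fly by default” example rather than anything from the paper: the conclusion flies(tweety) follows from the KB, but stops following once conflicting evidence is added:

```python
def conclusions(kb):
    """Toy non-monotonic reasoner: birds fly by default, unless known to be penguins."""
    derived = set(kb)
    if "bird(tweety)" in kb and "penguin(tweety)" not in kb:
        derived.add("flies(tweety)")
    return derived

kb = {"bird(tweety)"}
print("flies(tweety)" in conclusions(kb))                        # True:  KB ⊨ flies(tweety)
print("flies(tweety)" in conclusions(kb | {"penguin(tweety)"}))  # False: KB ∧ γ ⊭ flies(tweety)
```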
19. Example: KB(ag) before derivation:
{ <A, Obs, {A}>,
  <A → B, Peter, {A → B}>,
  <B → C, Susan, {B → C}> }
After derivation:
{ <A, Obs, {A}>,
  <A → B, Peter, {A → B}>,
  <B → C, Susan, {B → C}>,
  <B, Der, {A, A → B}>,
  <C, Der, {A, A → B, B → C}> }
The agent trusts Peter to a certain degree of belief, and Susan to another.
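The step from the first KB to the second can be reproduced with a small forward-chaining sketch that records, for each derived belief, the origin Der and the set of base formulas supporting it (encoding rules as "A -> B" strings is an assumption for illustration):

```python
# Beliefs are triples <formula, origin, support>; rules are encoded as "A -> B" strings.
base = [
    ("A",      "Obs",   frozenset({"A"})),
    ("A -> B", "Peter", frozenset({"A -> B"})),
    ("B -> C", "Susan", frozenset({"B -> C"})),
]

def close_under_modus_ponens(kb):
    """Forward-chain with modus ponens, recording the support set of each derived belief."""
    kb = list(kb)
    changed = True
    while changed:
        changed = False
        facts = {f: s for f, _, s in kb if "->" not in f}
        for formula, _, rule_support in list(kb):
            if "->" in formula:
                premise, conclusion = (p.strip() for p in formula.split("->"))
                if premise in facts and conclusion not in facts:
                    kb.append((conclusion, "Der", facts[premise] | rule_support))
                    changed = True
    return kb

for belief in close_under_modus_ponens(base):
    print(belief)
# Adds <B, Der, {A, A -> B}> and <C, Der, {A, A -> B, B -> C}>, matching the second KB above.
```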
20. [Diagram of the WTR belief structure. Labels: Collected Data (β0), Wishful Thoughts (γ0), Context (βγ), Wishful Beliefs (γ), Base Beliefs (β), Derived Beliefs, World, Goals, Observatory, Communication, WT supports, valid supports.]
24. Related Approaches
❏ Epistemic Modal Logic with Belief Operator
❏ Belief Revision (family of) Logics
❏ Multi-Agent Systems
❏ Cognitive Agents
❏ Description Logics Reasoning
25. Future
Conscious Agents
Synesthesia
Query Optimisations
Automated Web Agents
Tim Berners-Lee originally expressed the vision of the Semantic Web as follows:
I have a dream for the Web [in which computers] become capable of analyzing all the data on the Web – the content, links, and transactions between people and computers. A "Semantic Web", which makes this possible, has yet to emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy and our daily lives will be handled by machines talking to machines. The "intelligent agents" people have touted for ages will finally materialize.