Update and Abstraction
in Model Checking of
Knowledge and Branching Time
N.V.Shilov, N.O.Garanina
Introduction
Combinations of traditional
program logics with logics of knowledge for
reasoning about multiagent systems.
The model checking problem in perfect
recall trace-based environments for
pairwise fusion of the logics:
Introduction
Program logics
 Elementary Propositional Dynamic Logic (EPDL)
 Computation Tree Logic with actions (Act-CTL)
 The propositional µ-Calculus (µC)
with epistemic logics
 Propositional Logic of Knowledge (PLK)
 Propositional Logic of Common Knowledge (PLC)
Introduction
This model checking problem
 is PSPACE-complete for EPDL-C,
 is non-elementary decidable for Act-CTL-K,
 is undecidable for Act-CTL-C,
µPLK and µPLC.
Introduction
Update+abstraction algorithm
for model checking Act-CTL-K in
perfect recall synchronous settings.
Parameters of algorithm complexity:
 number of agents,
 number of states,
 knowledge depth,
 formula size.
Introduction
We define:
 the knowledge depth for formulas of
Act-CTL-Kn,
 sublogics Act-CTL-K^k_n with a bounded knowledge depth k ≥ 0,
 k-trees,
 knowledge update functions G^k_a on k-trees for every action a.
Introduction
We suggest:
 an algorithm that transforms Act-CTL-K^k_n into Act+n-CTL,
 k-trees + update functions → finite Kripke structure ↔ original perfect recall environment,
 the resulting model checking algorithm solves Act+n-CTL on k-trees.
Background Logics
Syntax:
 true, false — Boolean constants,
 Prp — propositional variables,
 Rlt — relational symbols,
 ¬, ∧, ∨ and some modalities.
Background Logics
A Kripke structure is a triple (DM, IM, VM), where
 the domain DM — a nonempty set of possible worlds,
 the interpretation IM: Rlt → 2^(DM×DM),
 the valuation VM: Prp → 2^DM.
Background Logics
Semantics:
 w=Mtrue and w=Mfalse,
 w=M p iff w∈VM(p) for p∈Prop,
 w=M ¬ϕ iff w=M ϕ,
 w=M ϕ ∧ ψ iff w=M ϕ and w=M ψ,
 w=M ϕ ∨ ψ iff w=M ϕ or w=M ψ,
 definition of modalities is specific.
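Read operationally, these clauses are a recursive evaluator over a finite structure. A minimal Python sketch; the toy structure, the tuple encoding of formulas, and all names are our own illustration, not from the talk:

```python
# Illustrative Kripke structure M = (D, I, V).
D = {"w1", "w2", "w3"}                         # possible worlds
I = {"a": {("w1", "w2"), ("w2", "w3")}}        # Rlt -> world pairs
V = {"p": {"w1", "w3"}}                        # Prp -> sets of worlds

def sat(w, phi):
    """w |= phi for formulas built from true/false, variables,
    'not', 'and', 'or'; modal clauses are logic-specific."""
    op = phi[0]
    if op == "true":
        return True
    if op == "false":
        return False
    if op == "var":
        return w in V[phi[1]]                  # w |= p  iff  w in V(p)
    if op == "not":
        return not sat(w, phi[1])
    if op == "and":
        return sat(w, phi[1]) and sat(w, phi[2])
    if op == "or":
        return sat(w, phi[1]) or sat(w, phi[2])
    raise ValueError(op)
```

The modal clauses are deliberately absent here: as the slide says, each logic adds its own.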
Background Logics
Propositional Logic of Knowledge PLK:
 Alphabet of relational symbols — [1..n].
 Syntax: Ki ϕ and Si ϕ, where i ∈ [1..n] and ϕ is a formula.
 Interpretation IM(i) is an equivalence.
 (DM, ∼_1, …, ∼_n, VM) with IM(i) = ∼_i.
Background Logics
Semantics:
 w ⊨M Si ϕ iff for some w′: w ∼_i w′ and w′ ⊨M ϕ,
 w ⊨M Ki ϕ iff for every w′: w ∼_i w′ implies w′ ⊨M ϕ.
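Over a finite model the two clauses are a direct existential/universal scan of the ∼_i-neighbours. An illustrative sketch restricted to atomic arguments for brevity; the toy relation and all names are ours:

```python
# ~_1 as an explicit set of pairs (an equivalence on the domain).
D = {"u", "v", "w"}
SIM = {1: {("u", "u"), ("v", "v"), ("w", "w"), ("u", "v"), ("v", "u")}}
V = {"p": {"u", "v"}}

def sat_S(w, i, p):
    """w |= S_i p  iff  p holds in SOME world ~_i-related to w."""
    return any(wp in V[p] for (x, wp) in SIM[i] if x == w)

def sat_K(w, i, p):
    """w |= K_i p  iff  p holds in EVERY world ~_i-related to w."""
    return all(wp in V[p] for (x, wp) in SIM[i] if x == w)
```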
Background Logics
Computation Tree Logic with Actions Act-CTL:
 Alphabet of relational symbols — action symbols Act.
 Syntax: AXa ϕ, EXa ϕ, AGa ϕ, AFa ϕ, EGa ϕ, EFa ϕ, A(ϕ Ua ψ), and E(ϕ Ua ψ).
 a-trace — a sequence (w1 … wj wj+1 …) with (wj, wj+1) ∈ IM(a) for every j.
 a-run — a maximal a-trace.
Background Logics
Semantics:
 w ⊨M AXa ϕ iff ws2 ⊨M ϕ for every a-run ws ∈ DM* with ws1 = w,
 w ⊨M AGa ϕ iff wsj ⊨M ϕ for every a-run ws ∈ DM* with ws1 = w and every 1 ≤ j ≤ |ws|,
 w ⊨M AFa ϕ iff wsj ⊨M ϕ for every a-run ws ∈ DM* with ws1 = w and some 1 ≤ j ≤ |ws|,
Background Logics
Semantics:
 w=MA(ϕ Ua
ψ) iff wsj=M ϕ and wsk=M ψ
for every a-run ws ∈DM* with ws1=w,
for some 1≤k≤|ws| and every 1≤j<k.
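On a finite structure the run-based until clause can be computed without enumerating runs, as a least fixpoint. A hedged sketch for E(ϕ Ua ψ); the encoding of world sets and all names are ours:

```python
# E(phi U_a psi) as a least fixpoint: start from the psi-worlds and
# repeatedly add phi-worlds that have an a-successor already in the set.

def eu(phi_worlds, psi_worlds, I_a):
    """Worlds satisfying E(phi U_a psi); I_a is a set of pairs (w, w')."""
    result = set(psi_worlds)
    changed = True
    while changed:
        changed = False
        for (w, wp) in I_a:
            if w in phi_worlds and wp in result and w not in result:
                result.add(w)
                changed = True
    return result
```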
Combining Knowledge
and Branching Time
Computation Tree Logic with
Actions and Knowledge Act-CTL-Kn:
 [1..n] — set of agents (n > 0),
 Act — action symbols.
 Syntax:
— true (false), Prp, ¬, ∧, ∨,
— knowledge modalities Ki and Si for i ∈ [1..n],
— branching-time constructs AXa, EXa, AGa, AFa, EGa, EFa, AUa, EUa for a ∈ Act.
Combining Knowledge
and Branching Time
 An environment is a tuple
E = (D, ∼_1, …, ∼_n, I, V) with
(D, ∼_1, …, ∼_n, V) — a model for PLKn and
(D, I, V) — a model for Act-CTL.
 E(ϕ) = { w | w ⊨E ϕ }.
Combining Knowledge
and Branching Time
A trace-based Perfect Recall Synchronous environment
PRS(E) = (DPRS, ∼^prs_1, …, ∼^prs_n, IPRS, VPRS):
 DPRS is the set of all pairs (ws, as), where
ws ∈ D+, as ∈ Act*, |ws| = |as|+1, and
(wsj, wsj+1) ∈ I(asj) for every j ∈ [1..|as|];
 for every p ∈ Prp and (ws, as) ∈ DPRS,
(ws, as) ∈ VPRS(p) iff ws_{|ws|} ∈ VE(p);
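Although DPRS is infinite, its elements up to a given trace length can be enumerated straight from this definition. A small sketch under our own encoding (traces and action sequences as tuples):

```python
# Enumerate all PRS states (ws, as) with |ws| <= max_len for a finite
# environment given as a domain D and an interpretation I.

def prs_states(D, I, max_len):
    """All (ws, as) in D_PRS with |ws| <= max_len."""
    states = [((w,), ()) for w in D]        # length-1 traces, no actions
    frontier = states[:]
    while frontier:
        new = []
        for (ws, asq) in frontier:
            if len(ws) >= max_len:
                continue
            for a, rel in I.items():
                for (x, y) in rel:
                    if x == ws[-1]:         # extend by an a-step
                        new.append((ws + (y,), asq + (a,)))
        states.extend(new)
        frontier = new
    return states
```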
Combining Knowledge
and Branching Time
 for every i ∈ [1..n] and for all (ws′, as′), (ws″, as″) ∈ DPRS,
(ws′, as′) ∼^prs_i (ws″, as″) iff
as′ = as″ and ws′_j ∼_i ws″_j for every j ∈ [1..|ws′|];

w′_1 —a_1→ w′_2 —a_2→ … —a_{m-1}→ w′_m
  ∼_i        ∼_i                    ∼_i
w″_1 —a_1→ w″_2 —a_2→ … —a_{m-1}→ w″_m
Combining Knowledge
and Branching Time
 for every a ∈ Act and for all (ws′, as′), (ws″, as″) ∈ DPRS,
((ws′, as′), (ws″, as″)) ∈ IPRS(a) iff
as′∘a = as″, ws″ = ws′∘w″, and
(w′, w″) ∈ IE(a), where w′ = ws′_{|ws′|};

w′_1 —a_1→ w′_2 —a_2→ … —a_{m-1}→ w′_m
            ↓ a
w′_1 —a_1→ w′_2 —a_2→ … —a_{m-1}→ w′_m —a→ w″
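The IPRS(a) clause is a one-step extension of a trace: append a to the action sequence and an a-successor of the last world to the world sequence. A sketch with traces encoded as tuples (names ours):

```python
# All I_PRS(a)-successors of a PRS state (ws, as).

def prs_step(state, a, I):
    """Extend (ws, as) by action a in every possible way."""
    ws, asq = state
    last = ws[-1]
    return [(ws + (w2,), asq + (a,))
            for (w1, w2) in I[a] if w1 == last]
```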
Combining Knowledge
and Branching Time
Example.
Guess Numbers Puzzle GNP(N,M) (N,M ≥ 0):
 Orbiter — referee,
Eloise and Abelard — two players.
 Abelard selects a hidden number h∈[1..N];
 Abelard never reports the hidden value to
Eloise.
Combining Knowledge
and Branching Time
 Eloise selects an initial value s∈[1..N]
for a personal counter;
 Eloise can increase or decrease
counter value by 10, 5 or 1 while in
the range [1..N];
 Eloise never reports the counter
values to Abelard.
Combining Knowledge
and Branching Time
 Orbiter reports to both players whether
the new value of the personal counter s is
less than, equal to, or greater than the hidden
number h.
 Can Eloise and Abelard simultaneously
learn the hidden value h and the initial
value s respectively after M arithmetic
steps?
Combining Knowledge
and Branching Time
 Two agents in the puzzle —
E (Eloise) and A (Abelard).
 Space
D=[0..N]×[1..N]×{<, >, =, out, ini}×[1..N]:
 [0..N] — an auxiliary counter c,
 [1..N] — values of the personal counter s,
 {<, >, =, out, ini } — results of comparisons,
 [1..N] — the hidden value h.
 Actions — (σn), for σ ∈{+,-} and n ∈{1,5,10}.
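To make the state space concrete, here is one possible encoding of a single GNP step in Python. The use of c as a step counter and of the "out" verdict for rejected moves are our reading of the slide, not part of it:

```python
# One GNP(N, M) step. State layout follows the slide: (c, s, verdict, h)
# with c an auxiliary counter, s Eloise's counter, h the hidden number.
# Assumption: a move leaving [1..N] is rejected and flagged "out".

def gnp_step(state, sigma, n, N):
    c, s, _, h = state
    s2 = s + n if sigma == "+" else s - n
    if not (1 <= s2 <= N):
        return (c, s, "out", h)           # move rejected, s unchanged
    verdict = "<" if s2 < h else (">" if s2 > h else "=")
    return (c + 1, s2, verdict, h)        # one more arithmetic step
```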
Combining Knowledge
and Branching Time
Knowledge Acquisition.
Combining Knowledge
and Branching Time
 Agent E can gain knowledge about the
hidden value from a sequence of states that
ends in a state with the equality sign.
 Agent A can gain knowledge about the initial
value from the sequence of operations that
generates this sequence of states.
Combining Knowledge
and Branching Time
 next = ∪ (σn) for σ ∈ {+,−}, n ∈ {1,5,10}
EF_next( (c ≤ M) ∧
  ∨_{h∈[1..N]} KE(hidden value is h) ∧
  ∨_{s∈[1..N]} KA(initial value is s) )
Bounded Knowledge Update
 The model checking problem for
Act-CTL-Kn in
perfect recall synchronous environments is
the question of decidability and complexity of the set
CHECK(Act-CTL-Kn) ≡
{ (E, (ws,as), ϕ) |
E — a finite environment, (ws,as) ∈ DPRS,
ϕ — a formula of Act-CTL-Kn,
(ws,as) ⊨_{PRS(E)} ϕ }.
Bounded Knowledge Update
Complexity parameters:
 E = (D, ∼_1, …, ∼_n, I, V) — a finite environment;
 d — the number of worlds in D;
 r — the number of edges in E;
 m = d + r;
 l(ws,as) = |ws|;
 f_ϕ — the size of ϕ ∈ Act-CTL-Kn;
 overall complexity — t = m + l(ws,as) + f_ϕ.
Bounded Knowledge Update
 Proposition 1
For all n > 1 and Act ≠ Ø,
CHECK(Act-CTL-Kn) is decidable
with the lower bound
2^{2^{⋰^{2}}} — a tower of exponentials of height O(t),
where t is the overall complexity of
the input.
Bounded Knowledge Update
 The knowledge depth of a formula is
the maximal nesting of knowledge
operators in that formula.
 Act-CTL-K^k_n — sublogics with a bounded
knowledge depth k ≥ 0.
 Act-CTL-Kn = ∪_{k≥0} Act-CTL-K^k_n.
Bounded Knowledge Update
 Tk — k-trees over E,
 Fk — forests of k-trees over E (k ≥ 0).
 T0 = { (w, ∅, …, ∅) | w ∈ D } (with n copies of ∅),
 Fk = 2^{Tk},
 Tk+1 = { (w, U1, …, Un) | w ∈ D and Ui ∈ Fk for i ∈ [1..n] },
 T = ∪_{k≥0} Tk.
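These inductive sets translate directly into code if a k-tree is stored as a nested hashable value. An illustrative sketch (all names are ours):

```python
# A k-tree is (w, U_1, ..., U_n) with each U_i a frozenset of
# (k-1)-trees; this enumerates T_k by the inductive definition.
from itertools import combinations, product

def tree0(w, n):
    """A 0-tree (w, {}, ..., {}) with n empty forests."""
    return (w,) + (frozenset(),) * n

def all_subsets(s):
    """F_k = 2^(T_k): all forests over a set of trees."""
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

def all_trees(D, n, k):
    """T_k: all k-trees over domain D for n agents."""
    if k == 0:
        return {tree0(w, n) for w in D}
    forests = all_subsets(all_trees(D, n, k - 1))
    return {(w,) + choice
            for w in D for choice in product(forests, repeat=n)}
```

For d worlds and n agents this gives |T0| = d and |T1| = d·(2^d)^n, in line with the exponential growth quantified in Proposition 2 below.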
Bounded Knowledge Update
k-tree for GNP(100,4)
Bounded Knowledge Update
 k-tree — a finite tree of height k:
• vertices — worlds of the environment,
• edges — agents;
 in a tuple (w, U1, …, Un):
• the world w — the actual state of the universe,
• the set Ui — the knowledge of agent i;
 0-tree (w, ∅, …, ∅) — just the world w;
 1-tree: Ui — knowledge about the universe;
 k-tree: Ui — knowledge about the universe
and the knowledge of the other agents.
Bounded Knowledge Update
 Proposition 2
Let k ≥ 0 be an integer and E be a finite
environment for n agents with d states.
Then
 the number Ck of k-trees over E satisfies
Ck ≤ exp(n×d, k)/n;
 if n < d, then the number Nk of
nodes in every (k+1)-tree over E satisfies
Nk < (Ck)².
Bounded Knowledge Update
 Knowledge available in a world (ws,as) ∈ PRS(E):
tree0(ws,as), …, treek(ws,as), …
 tree0(ws,as) = (ws_{|ws|}, ∅, …, ∅),
 treek+1(ws,as) = (ws_{|ws|},
{treek(ws′,as′) | (ws′,as′) ∼^prs_1 (ws,as)},
…,
{treek(ws′,as′) | (ws′,as′) ∼^prs_n (ws,as)}).
Bounded Knowledge Update
Knowledge update functions
(for E, k ≥ 0, a ∈ Act, i ∈ [1..n]):
 G^k_a : Tk × D → Tk;
 H^{k,i}_a : Fk × D → Fk;
 G^0_a(tr, w) = (w, ∅, …, ∅) iff (root(tr), w) ∈ I(a);
 H^{k,i}_a(U, w) = { G^k_a(tr, w′) | tr ∈ U and w′ ∼_i w };
 G^{k+1}_a((w, U1, …, Un), w′) =
(w′, H^{k,1}_a(U1, w′), …, H^{k,n}_a(Un, w′))
iff (w, w′) ∈ I(a).
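The update functions transcribe almost literally onto a nested-tuple representation of trees. A sketch; the encoding, the name SIM for the relations ∼_i (as sets of pairs), and the use of None for an undefined update are our choices:

```python
# G^k_a and H^{k,i}_a over trees encoded as (w, U_1, ..., U_n),
# each U_i a frozenset of lower trees.

def G(k, a, tr, w, I, SIM, n):
    """G^k_a(tr, w): updated k-tree after action a reaches world w."""
    if (tr[0], w) not in I[a]:
        return None                       # update undefined here
    if k == 0:
        return (w,) + (frozenset(),) * n
    return (w,) + tuple(H(k - 1, i, a, tr[1 + i], w, I, SIM, n)
                        for i in range(n))

def H(k, i, a, U, w, I, SIM, n):
    """H^{k,i}_a(U, w): update agent i's forest of k-trees."""
    out = set()
    for tr in U:
        for (x, wp) in SIM[i]:            # wp ~_i w
            if x == w:
                upd = G(k, a, tr, wp, I, SIM, n)
                if upd is not None:
                    out.add(upd)
    return frozenset(out)
```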
Bounded Knowledge Update
 Knowledge acquisition in GNP(100,4)
Bounded Knowledge Update
 Proposition 3
For every k ≥ 0, every a ∈ Act,
every finite environment E,
every (ws,as) ∈ DPRS, and every w ∈ D,
the following incremental knowledge
update property holds:
treek((ws,as)∘(w,a)) = G^k_a(treek(ws,as), w).
Bounded Knowledge Abstraction
 Translation TL: Act-CTL-Kn → Act+n-CTL:
 the action alphabet Act+n = Act ∪ [1..n];
 TL(Ki) = AXi, TL(Si) = EXi;
 TL(ϕ) = ϕ+n for ϕ ∈ Act-CTL-Kn.
Translation TE: E → E+n:
 TE((D, ∼_1, …, ∼_n, I, V)) = (D, I+n, V), where
I+n(a) = I(a) for a ∈ Act and I+n(i) = ∼_i for i ∈ [1..n].
Bounded Knowledge Abstraction
 Proposition 4
For every environment E and every
formula ϕ of Act-CTL-Kn:
E(ϕ) = E+n(ϕ+n).
In particular,
PRS(E)(ϕ) = (PRS(E))+n(ϕ+n).
Bounded Knowledge Abstraction
 The associated model based on k-trees for Act+n-CTL
is TRk(E) = (Dk, Ik, Vk):
 Dk — the set of all 0-, …, k-trees over E;
 for a ∈ Act: Ik(a) = { (tr′, tr″) ∈ Dk×Dk |
tr″ = G^j_a(tr′, w) for some j ∈ [0..k] and w ∈ D };
 for i ∈ [1..n]: Ik(i) = { (tr′, tr″) ∈ Dk×Dk |
tr′ = (w, U1, …, Un) and tr″ ∈ Ui for some w ∈ D };
 Vk(p) = { tr | root(tr) ∈ V(p) } for every p ∈ Prp.
Bounded Knowledge Abstraction
 Treek(P) = { treek(ws, as) | (ws, as) ∈ P };
 Trace(Pk) = { (ws, as) | treek(ws, as) ∈ Pk }.
 Proposition 5
For every n ≥ 1 and k ≥ 0, for every formula
ϕ ∈ Act-CTL-Kn with knowledge depth at most k,
and for every finite environment E,
the following holds:
Treek(PRS(E)(ϕ)) = TRk(E)(ϕ+n),
PRS(E)(ϕ) = Trace(TRk(E)(ϕ+n)).
Bounded Knowledge Abstraction
Action transition in TRk(E)
Bounded Knowledge Abstraction
Knowledge transition in TRk(E)
Bounded Knowledge Abstraction
 Proposition 6
For every n ≥ 1 and k ≥ 0 and every environment E,
the model TRk(E) is an abstraction of the model
PRS(E)+n with respect to formulas of Act+n-CTL
which correspond to formulas of Act-CTL-Kn
with knowledge depth at most k.
The corresponding abstraction function maps
every trace to the k-tree of this trace.
Bounded Knowledge Abstraction
 Proposition 7
For all integers n ≥ 1 and k ≥ 0, every synchronous
environment with perfect recall PRS(E), and every
formula ϕ of Act-CTL-Kn with knowledge depth
at most k, the model checking problem
is decidable with an upper bound in which
f is the size of the formula and
d is the number of states in D.
Bounded Knowledge Abstraction
 Model checking algorithm:
 Input a formula ϕ of Act-CTL-Kn and
count its knowledge depth k.
 Convert ϕ into the corresponding formula
ϕ+n of Act+n-CTL.
 Input a finite environment E and
construct the finite model TRk(E).
 Input a trace (ws, as) and build the
corresponding k-tree tr.
 Model check ϕ+n on tr in TRk(E).
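The first step is easy to make concrete: knowledge depth is a structural recursion over the formula. A sketch on tuple-encoded formulas (the encoding is ours):

```python
# Knowledge depth = maximal nesting of K_i / S_i operators.
# Formulas: ("var", p), ("K", i, sub), ("S", i, sub),
# ("not", f), ("and", f, g), ("EX", a, f), ... (our encoding).

def knowledge_depth(phi):
    op = phi[0]
    if op in ("true", "false", "var"):
        return 0
    if op in ("K", "S"):                  # ("K", i, sub)
        return 1 + knowledge_depth(phi[2])
    # boolean and branching-time constructs: max over subformulas
    return max(knowledge_depth(sub) for sub in phi[1:]
               if isinstance(sub, tuple))
```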
Bounded Knowledge Abstraction
 The model checker has been implemented in C#.
 Data structures — vector-affine trees.
 Experiments with the Guess Numbers
Puzzle for various N (the maximal N = 15).
 |E| = 120000, 2^|E| ≈ 4×10^36123.
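The closing figure is easy to verify: 2^|E| has ⌊|E|·log10 2⌋ + 1 decimal digits, and the fractional part of |E|·log10 2 gives the leading digit. A quick sanity check:

```python
# With |E| = 120000 states, 2^|E| has 36124 decimal digits and
# leading digit about 4, i.e. 2^|E| ≈ 4 × 10^36123 as on the slide.
import math

digits = 120000 * math.log10(2)           # ≈ 36123.6
assert int(digits) == 36123
assert round(10 ** (digits - 36123), 1) == 4.0
```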
Nikolay Shilov. CSEDays 3

  • 1. Update and Abstraction in Model Checking of Knowledge and Branching Time N.V.Shilov, N.O.Garanina
  • 2. Introduction Combinations of traditional program logics with logics of knowledge for reasoning about multiagent systems. The model checking problem in perfect recall trace-based environments for pairwise fusion of the logics:
  • 3. Introduction Program logics  Elementary Propositional Dynamic Logic (EPDL)  Computation Tree Logic with actions (Act-CTL)  The propositional µ-Calculus (µC) with epistemic logics  Propositional Logic of Knowledge (PLK)  Propositional Logic of Common Knowledge (PLC)
  • 4. Introduction This model checking problem  is PSPACE-complete for EPDL-C,  is non-elementary decidable for Act-CTL-K,  is undecidable for Act-CTL-C, µPLK and µPLC.
  • 5. Introduction Update+abstraction algorithm for model checking Act-CTL-K in perfect recall synchronous settings. Parameters of algorithm complexity:  number of agents,  number of states,  knowledge depth,  formula size.
  • 6. Introduction We define:  the knowledge depth for formulas of Act-CTL-Kn,  sublogics Act-CTL-Kk n with a bounded knowledge depth k ≥ 0,  k-trees,  knowledge update function Gk a on k- trees for every action a.
  • 7. Introduction We suggest:  an algorithm that transforms Act-CTL-Kk n into Act+n -CTL,  k-trees + update functions → finite Kripke structure ↔ original perfect recall environment,  the resulting model checking algorithm solves Act+n-CTL on k-trees.
  • 8. Background Logics Syntax:  true, false — Boolean constants,  Prp — propositional variables,  Rlt — relational symbols,  ¬, ∧, ∨ and some modalities.
  • 9. Background Logics Kripke structure is a triple (DM,IM,VM), where  the domain DM — a nonempty set of possible worlds,  the interpretation IM: Rlt  2DM×DM,  the valuation VM: Prp  DM.
  • 10. Background Logics Semantics:  w=Mtrue and w=Mfalse,  w=M p iff w∈VM(p) for p∈Prop,  w=M ¬ϕ iff w=M ϕ,  w=M ϕ ∧ ψ iff w=M ϕ and w=M ψ,  w=M ϕ ∨ ψ iff w=M ϕ or w=M ψ,  definition of modalities is specific.
  • 11. Background Logics Propositional Logic of Knowledge PLK:  Alphabet of relational symbols — [1..n].  Syntax: Ki ϕ and Si ϕ, i ∈[1..n] and ϕ — a formula.  Interpretation IM(i) is an equivalence.  (DM, ∼,… ∼, VM) with IM(i) = ∼. 1 n i
  • 12. Background Logics Semantics:  w=MSi ϕ iff for some w’: w ∼ w’ and w’=M ϕ,  w=MKi ϕ iff for every w’: w ∼ w’ implies w’=M ϕ. i i
  • 13. Background Logics Computational Tree Logic with Actions Act-CTL:  Alphabet of relational symbols — action symbols Act.  Syntax: AXa ϕ, EXa ϕ, AGa ϕ, AFa ϕ, EGa ϕ, EFa ϕ, AϕUa ψ, and EϕUa ψ.  a-trace — (w1 ... wj wj+1 ...) with (wj,wj+1)∈IM(a) for every j.  a-run — a maximal a-trace.
  • 14. Background Logics Semantics:  w=M AXa ϕ iff ws2=Mϕ for every a-run ws ∈DM* with ws1=w,  w=MAGa ϕ iff wsj=M ϕ for every a-run ws ∈DM* with ws1=w and every 1≤j≤|ws|,  w=MAFa ϕ iff wsj=M ϕ for every a-run ws ∈DM* with ws1=w and some 1≤j≤|ws|,
  • 15. Background Logics Semantics:  w ⊨M A(ϕ Ua ψ) iff for every a-run ws ∈ DM* with ws1 = w there is some 1 ≤ k ≤ |ws| such that wsk ⊨M ψ and wsj ⊨M ϕ for every 1 ≤ j < k.
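The existential dual EFa ϕ (ϕ reachable along some a-trace) has the usual least-fixpoint characterisation EFa ϕ = ϕ ∨ EXa EFa ϕ, computed by backward closure. A sketch on an invented four-state model:

```python
# EF_a phi as reachability over the a-transitions: backward closure of
# the set of phi-worlds under the a-edges.  Model is illustrative.

D = {1, 2, 3, 4}
Ia = {(1, 2), (2, 3), (3, 3)}          # a-transitions
phi_worlds = {3}                        # worlds satisfying phi

def ef_a(trans, goal):
    """Least fixpoint: worlds with some a-trace reaching a goal world."""
    result = set(goal)
    changed = True
    while changed:
        changed = False
        for (w, w2) in trans:
            if w2 in result and w not in result:
                result.add(w)
                changed = True
    return result

print(sorted(ef_a(Ia, phi_worlds)))    # → [1, 2, 3]
```

World 4 has no a-trace into the goal set, so it drops out of the fixpoint.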
  • 16. Combining Knowledge and Branching Time Computation Tree Logic with Actions and Knowledge Act-CTL-Kn:  [1..n] — set of agents (n > 0),  Act — action symbols.  Syntax: — true (false), Prp, ¬, ∧, ∨, — knowledge modalities Ki and Si for i ∈ [1..n], — branching-time constructs AXa, EXa, AGa, AFa, EGa, EFa, AUa, EUa for a ∈ Act.
  • 17. Combining Knowledge and Branching Time  An environment is a tuple E = (D, ∼1, …, ∼n, I, V) with (D, ∼1, …, ∼n, V) — a model for PLKn and (D, I, V) — a model for Act-CTL.  E(ϕ) = { w | w ⊨E ϕ }.
  • 18. Combining Knowledge and Branching Time A trace-based Perfect Recall Synchronous environment PRS(E) = (DPRS, ∼1^prs, …, ∼n^prs, IPRS, VPRS):  DPRS is the set of all pairs (ws, as), where ws ∈ D+, as ∈ Act*, |ws| = |as|+1, and (wsj, wsj+1) ∈ I(asj) for every j ∈ [1..|as|];  for every p ∈ Prp and (ws, as) ∈ DPRS, (ws, as) ∈ VPRS(p) iff ws|ws| ∈ VE(p);
  • 19. Combining Knowledge and Branching Time  for every i ∈ [1..n] and for all (ws', as'), (ws'', as'') ∈ DPRS, (ws', as') ∼i^prs (ws'', as'') iff as' = as'' and ws'j ∼i ws''j for every j ∈ [1..|ws'|]: w'1 →a1 w'2 →a2 … →am-1 w'm and w''1 →a1 w''2 →a2 … →am-1 w''m with w'j ∼i w''j at every position j.
  • 24. Combining Knowledge and Branching Time  for every a ∈ Act and for all (ws', as'), (ws'', as'') ∈ DPRS, ((ws', as'), (ws'', as'')) ∈ IPRS(a) iff as'°a = as'', ws'' = ws'°w'', and (w', w'') ∈ IE(a), where w' = ws'|ws'|: the run w'1 →a1 w'2 →a2 … →am-1 w'm is extended by the a-step to w'1 →a1 … →am-1 w'm →a w''.
  • 26. Combining Knowledge and Branching Time Example. Guess Numbers Puzzle GNP(N,M) (N,M ≥ 0):  Orbiter — referee, Eloise and Abelard — two players.  Abelard selects a hidden number h∈[1..N];  Abelard never reports the hidden value to Eloise.
  • 27. Combining Knowledge and Branching Time  Eloise selects an initial value s ∈ [1..N] for a personal counter;  Eloise can increase or decrease the counter value by 10, 5 or 1 while staying in the range [1..N];  Eloise never reports the counter values to Abelard.
  • 28. Combining Knowledge and Branching Time  Orbiter reports to both players whether the new value of the personal counter s is less than, equal to, or greater than the hidden number h.  Can Eloise and Abelard simultaneously learn the hidden value h and the initial value s, respectively, after M arithmetic steps?
  • 29. Combining Knowledge and Branching Time  Two agents in the puzzle — E (Eloise) and A (Abelard).  Space D=[0..N]×[1..N]×{<, >, =, out, ini}×[1..N]:  [0..N] — an auxiliary counter c,  [1..N] — values of the personal counter s,  {<, >, =, out, ini } — results of comparisons,  [1..N] — the hidden value h.  Actions — (σn), for σ ∈{+,-} and n ∈{1,5,10}.
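The state space and actions of this slide admit a direct encoding: a state is a tuple (c, s, cmp, h) and an action a pair (σ, n). The transition function below, including its treatment of out-of-range moves as "out", is our own sketch of the rules, not code from the talk.

```python
# Illustrative encoding of the GNP(N, M) transitions: apply the
# arithmetic step sigma n to the personal counter s, then record
# Orbiter's comparison of the new s with the hidden number h.

from itertools import product

def step(state, action, N):
    c, s, cmp_, h = state
    sigma, n = action
    s2 = s + n if sigma == "+" else s - n
    if not (1 <= s2 <= N):
        return (c + 1, s, "out", h)     # move leaves [1..N]: counter unchanged
    cmp2 = "<" if s2 < h else (">" if s2 > h else "=")
    return (c + 1, s2, cmp2, h)

actions = list(product("+-", (1, 5, 10)))   # the six (sigma n) actions
print(step((0, 7, "ini", 12), ("+", 5), 100))   # → (1, 12, '=', 12)
```

Reaching the "=" comparison is what lets Eloise pin down h, while the sequence of applied actions is what Abelard observes about s.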
  • 30. Combining Knowledge and Branching Time Knowledge Acquisition.
  • 31. Combining Knowledge and Branching Time  Agent E can get knowledge about the hidden value from a sequence of states that finishes with a state carrying the equality sign.  Agent A can get knowledge about the initial value from the sequence of operations that generates this sequence of states.
  • 32. Combining Knowledge and Branching Time  next = ∪(σn), σ∈{+,-}, n ∈{1,5,10} EFnext((c ≤M) ∧ ∨h ∈[1..N]KE (hidden value is h) ∧ ∨s ∈ [1..N]KA (initial value is s))
  • 33. Bounded Knowledge Update  The model checking problem for Act-CTL-Kn in perfect recall synchronous environments is the question of decidability and complexity of the set CHECK(Act-CTL-Kn) ≡ {(E, (ws, as), ϕ) | E — a finite environment, (ws, as) ∈ DPRS, ϕ — a formula of Act-CTL-Kn, (ws, as) ⊨PRS(E) ϕ }.
  • 34. Bounded Knowledge Update Complexity parameters:  E = (D, ∼1, …, ∼n, I, V) — a finite environment,  d — the number of worlds in D;  r — the number of edges in E;  m = (d + r);  l(ws, as) = |ws|;  fϕ — the size of ϕ ∈ Act-CTL-Kn.  Overall complexity — t = (m + l(ws, as) + fϕ).
  • 35. Bounded Knowledge Update  Proposition 1 For all n > 1 and Act ≠ Ø, CHECK(Act-CTL-Kn) is decidable with the lower bound 2^2^…^2 — a tower of exponentials of height O(t), where t is the overall complexity of the input.
  • 36. Bounded Knowledge Update  The knowledge depth of a formula is the maximal nesting of knowledge operators in that formula.  Act-CTL-K^k_n — the sublogic with knowledge depth bounded by k ≥ 0.  Act-CTL-Kn = ∪k≥0 Act-CTL-K^k_n.
  • 37. Bounded Knowledge Update  Tk — k-trees over E,  Fk — forests of k-trees over E (k ≥ 0).  T0 = {(w, ∅, …, ∅) | w ∈ D, with n copies of ∅},  Fk = 2^Tk,  Tk+1 = {(w, U1, …, Un) | w ∈ D and Ui ∈ Fk for i ∈ [1..n]},  T = ∪k≥0 Tk.
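The inductive definition above translates directly into a data structure: a 0-tree is a world with n empty forests, and a (k+1)-tree pairs a world with n forests of k-trees. The enumerator below is our own sketch, feasible only for tiny domains since |Tk| grows as a tower of exponentials.

```python
# Sketch of k-trees over a domain D with n agents, following
# T_0 = {(w, {}, ..., {})},  F_k = 2^{T_k},  T_{k+1} = D x F_k^n.

from itertools import product, combinations

def tree0(w, n):
    """0-tree: a bare world with n empty forests."""
    return (w,) + (frozenset(),) * n

def all_ktrees(D, n, k):
    """Enumerate T_k for a (tiny) domain D -- exponential, illustration only."""
    if k == 0:
        return {tree0(w, n) for w in D}
    prev = list(all_ktrees(D, n, k - 1))
    # F_{k-1}: every subset of T_{k-1}, as frozensets
    forests = [frozenset(c) for r in range(len(prev) + 1)
               for c in combinations(prev, r)]
    return {(w,) + us for w in D for us in product(forests, repeat=n)}

print(len(all_ktrees({"w1", "w2"}, 1, 0)))   # → 2
print(len(all_ktrees({"w1", "w2"}, 1, 1)))   # → 8  (2 worlds x 2^2 forests)
```

Even for d = 2 and n = 1 the count multiplies by a power set at each level, which is exactly the growth that Proposition 2 bounds.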
  • 39. Bounded Knowledge Update  k-tree — finite tree of height k, • vertices — worlds of the environment, • edges — agents;  In a tuple (w, U1, ... Un) • world w — actual state of the universe, • the set Ui — knowledge of the agent i;  0-tree: (w, ∅,... ∅) — world w;  1-tree: Ui — knowledge about the universe;  k-tree: Ui — knowledge about the universe and knowledge of the other agents.
  • 40. Bounded Knowledge Update  Proposition 2 Let k ≥ 0 be an integer and E be a finite environment for n agents with d states. Then  the number CK of k-trees over E satisfies CK ≤ exp(n×d, k)/n;  if n < d, then the number NK of nodes in every (k+1)-tree over E satisfies NK < (CK)^2.
  • 41. Bounded Knowledge Update  Knowledge available in world (ws, as) ∈ PRS(E):  tree0(ws, as) … treek(ws, as) …  tree0(ws, as) = (ws|ws|, ∅, …, ∅),  treek+1(ws, as) = (ws|ws|, {treek(ws', as') | (ws', as') ∼1^prs (ws, as)}, …, {treek(ws', as') | (ws', as') ∼n^prs (ws, as)}).
  • 42. Bounded Knowledge Update Knowledge update functions. Fix E, k ≥ 0, a ∈ Act, i ∈ [1..n].  G^a_k : Tk × D → Tk;  H^a_{k,i} : Fk × D → Fk;  G^a_0(tr, w) = (w, ∅, …, ∅) iff (root(tr), w) ∈ I(a);  H^a_{k,i}(U, w) = {G^a_k(tr, w') | tr ∈ U and w' ∼i w};  G^a_{k+1}((w, U1, …, Un), w') = (w', H^a_{k,1}(U1, w'), …, H^a_{k,n}(Un, w')) iff (w, w') ∈ I(a).
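These mutually recursive definitions can be sketched directly, with a partial G (returning None where the update is undefined because (root(tr), w') is not an a-step). The tiny environment with one agent and two worlds is invented for illustration.

```python
# Sketch of the knowledge update functions on k-trees:
# G^a_k(tr, w') updates a whole k-tree when action a leads to world w';
# H^a_{k,i}(U, w') updates agent i's forest U towards worlds ~_i w'.

I = {"a": {("u", "v"), ("v", "v")}}                           # I(a)
eq = {1: {("u", "u"), ("v", "v"), ("u", "v"), ("v", "u")}}    # ~1

def G(k, a, tr, w2):
    """G^a_k(tr, w'): None unless (root(tr), w') is an a-transition."""
    if (tr[0], w2) not in I[a]:
        return None
    if k == 0:
        return (w2,) + (frozenset(),) * (len(tr) - 1)
    return (w2,) + tuple(H(k - 1, a, i + 1, tr[i + 1], w2)
                         for i in range(len(tr) - 1))

def H(k, a, i, U, w2):
    """H^a_{k,i}(U, w'): update every tree in U towards worlds ~_i w'."""
    confusable = {x for (x, y) in eq[i] if y == w2}   # worlds i confuses with w'
    return frozenset(t for tr in U for v in confusable
                     if (t := G(k, a, tr, v)) is not None)

t0u, t0v = ("u", frozenset()), ("v", frozenset())
tr1 = ("u", frozenset({t0u, t0v}))      # a 1-tree for one agent
print(G(1, "a", tr1, "v"))              # updated 1-tree after action a
```

Filtering out the None results is what prunes worlds the agent can now exclude: after the a-step, both candidate 0-trees collapse onto world v.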
  • 43. Bounded Knowledge Update  Knowledge acquisition in GNP(100,4)
  • 44. Bounded Knowledge Update  Proposition 3 For every k ≥ 0, every a ∈ Act, every finite environment E, every (ws, as) ∈ DPRS, and every w ∈ D, the following incremental knowledge update property holds: treek((ws, as)°(w, a)) = G^a_k(treek(ws, as), w).
  • 45. Bounded Knowledge Abstraction  Translation TL: Act-CTL-Kn → Act+n-CTL.  T(Act+n) = Act ∪ [1..n];  T(Ki) = AXi, T(Si) = EXi;  T(ϕ) = ϕ+n for ϕ ∈ Act-CTL-Kn.  Translation TE: E → E+n.  TE((D, ∼1, …, ∼n, I, V)) = (D, I+n, V), where I+n(a) = I(a) for a ∈ Act and I+n(i) = ∼i for i ∈ [1..n].
  • 46. Bounded Knowledge Abstraction  Proposition 4 For every environment E and every formula ϕ of Act-CTL-Kn: E(ϕ) = E+n(ϕ+n). In particular, PRS(E)(ϕ) = (PRS(E))+n(ϕ+n).
  • 47. Bounded Knowledge Abstraction  Associated model based on k-trees for Act+n-CTL TRk(E) = (Dk, Ik, Vk):  Dk — the set of all 0-, …, k-trees over E;  for a ∈ Act: Ik(a) = { (tr', tr'') ∈ Dk×Dk | tr'' = G^a_j(tr', w) for some j ∈ [0..k] and w ∈ D};  for i ∈ [1..n]: Ik(i) = { (tr', tr'') ∈ Dk×Dk | tr'' ∈ Ui, tr' = (w, U1, …, Un) for some w ∈ D};  Vk(p) = {tr | root(tr) ∈ V(p)} for every p ∈ Prp.
  • 48. Bounded Knowledge Abstraction  Treek(P) = { treek(ws, as) | (ws, as) ∈ P};  Trace(Pk) = { (ws, as) | treek(ws, as) ∈ Pk}.  Proposition 5 For every n ≥ 1 and k ≥ 0, for every formula ϕ ∈ Act-CTL-Kn with knowledge depth at most k, and for every finite environment E, the following holds: Treek(PRS(E)(ϕ)) = TRk(E)(ϕ+n), PRS(E)(ϕ) = Trace(TRk(E)(ϕ+n)).
  • 49. Bounded Knowledge Abstraction Action transition in TRk(E)
  • 51. Bounded Knowledge Abstraction  Proposition 6 For every n ≥ 1 and k ≥ 0 and every environment E, the model TRk(E) is an abstraction of the model PRS(E)+n with respect to formulas of Act+n-CTL which correspond to formulas of Act-CTL-Kn with knowledge depth at most k. The corresponding abstraction function maps every trace to the k-tree of this trace.
  • 52. Bounded Knowledge Abstraction  Proposition 7 For every integer n ≥ 1 and k ≥ 0, every synchronous environment with perfect recall PRS(E), and every formula ϕ of Act-CTL-Kn with knowledge depth at most k, the model checking problem is decidable with an upper bound in terms of f, the size of the formula, and d, the number of states in D.
  • 53. Bounded Knowledge Abstraction  Model checking algorithm:  Input a formula ϕ of Act-CTL-Kn and count its knowledge depth k.  Convert ϕ into the corresponding formula ϕ+n of Act+n-CTL.  Input a finite environment E and construct the finite model TRk(E).  Input a trace (ws, as) and build the corresponding k-tree tr.  Model check ϕ+n on tr in TRk(E).
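The five steps above can be sketched as a driver. Only the knowledge-depth counter is implemented here; `translate`, `build_tree_model`, `tree_of_trace`, and `check_ctl` are hypothetical stubs standing in for the machinery of the previous slides.

```python
# Sketch of the overall algorithm; the helpers named in the driver are
# placeholders (not defined here), only knowledge_depth is executable.

def model_check_prs(phi, E, trace):
    k = knowledge_depth(phi)            # step 1: count nesting of K_i / S_i
    phi_n = translate(phi)              # step 2: Act-CTL-K^k_n -> Act+n-CTL
    TRk = build_tree_model(E, k)        # step 3: finite model on k-trees
    tr = tree_of_trace(trace, k)        # step 4: k-tree of the input trace
    return check_ctl(TRk, tr, phi_n)    # step 5: standard CTL model checking

def knowledge_depth(phi):
    """Maximal nesting of knowledge operators in a tuple-encoded formula."""
    if phi[0] in ("K", "S"):
        return 1 + knowledge_depth(phi[2])
    return max((knowledge_depth(p) for p in phi[1:] if isinstance(p, tuple)),
               default=0)

print(knowledge_depth(
    ("and", ("K", 1, ("prop", "p")),
            ("S", 2, ("K", 1, ("prop", "q"))))))   # → 2
```

The point of step 1 is that k fixes the abstraction once and for all: TRk(E) is finite, so step 5 is ordinary finite-state CTL model checking.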
  • 54. Bounded Knowledge Abstraction  The model checker has been implemented in C#.  Data structures — vector-affine trees.  Experiments with the Guess Numbers Puzzle for various N (the maximal N = 15).  |E| = 120000, 2^|E| ≈ 4×10^36123.