Handout for the course Abstract Argumentation and Interfaces to Argumentative Reasoning

Federico Cerutti
Rita Levi-Montalcini Research Fellow at University of Brescia

Abstract: Introduction to abstract argumentation and semantics, signatures and decomposability. Complexity of reasoning problems on an abstract argumentation framework and state-of-the-art solvers. Graphical interfaces to reasoning problems and natural language interfaces, with an introduction to Natural Language Generation. Course held as part of the Second Summer School on Argumentation: Computational and Linguistic Perspectives, September 2016, http://ssa2016.west.uni-koblenz.de/

Abstract Argumentation
and
Interfaces to Argumentative
Reasoning
Handouts
Federico Cerutti
September 2016
Contents
Contents 1
1 Dung’s AF 3
1.1 Principles for Extension-based Semantics: [BG07] . . . . . 3
1.2 Acceptability of Arguments [PV02; BG09a] . . . . . . . . . . 4
1.3 (Some) Semantics [Dun95] . . . . . . . . . . . . . . . . . . . . 5
1.4 Labelling-Based Semantics Representation [Cam06] . . . . 6
1.5 Skepticism Relationships [BG09b] . . . . . . . . . . . . . . . 9
1.6 Signatures [Dun+14] . . . . . . . . . . . . . . . . . . . . . . . 9
1.7 Decomposability and Transparency [Bar+14] . . . . . . 12
1.8 Extension-based I/O Characterisation [GLW16] . . . . . . . 13
2 Implementations 14
2.1 Ad Hoc Procedures . . . . . . . . . . . . . . . . . . . . . . . . 14
2.2 Constraint Satisfaction Programming . . . . . . . . . . . . . 14
2.3 Answer Set Programming . . . . . . . . . . . . . . . . . . . . 15
2.4 Propositional Satisfiability Problems . . . . . . . . . . . . . 15
2.5 Second-order Solver [BJT16] . . . . . . . . . . . . . . . . . . 23
2.6 Which One? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3 Ranking-Based Semantics 28
3.1 The Categoriser Semantics [BH01] . . . . . . . . . . . . . . . 28
3.2 Properties for Ranking-Based Semantics [Bon+16] . . . . . 28
4 Argumentation Schemes 33
4.1 An example: Walton et al.'s Argumentation Schemes for
Practical Reasoning . . . . . . . . . . . . . . . . . . . . . . . . 33
4.2 AS and Dialogues . . . . . . . . . . . . . . . . . . . . . . . . . 34
5 Semantic Web Argumentation 38
5.1 AIF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
5.2 AIF-OWL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
6 CISpaces 43
6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
6.2 Intelligence Analysis . . . . . . . . . . . . . . . . . . . . . . . 43
6.3 Reasoning with Evidence . . . . . . . . . . . . . . . . . . . . . 46
6.4 Arguments for Sensemaking . . . . . . . . . . . . . . . . . . 46
6.5 Arguments for Provenance . . . . . . . . . . . . . . . . . . . . 48
Cardiff University, 2016 Page 1
7 Natural Language Interfaces 50
7.1 Experiments with Humans: Scenarios [CTO14] . . . . . . . 50
7.2 Lessons From Argument Mining: [BR11] . . . . . . . . . . . 55
Bibliography 56
1 Dung’s Argumentation
Framework
Acknowledgement
This handout includes material from a number of collaborators including Pietro Baroni, Massimiliano Giacomin, Thomas Linsbichler, and Stefan Woltran.
Definition 1 ([Dun95]). A Dung argumentation framework AF is a pair 〈A, →〉 where A is a set of arguments, and → is a binary relation on A, i.e. → ⊆ A × A. ♠
An argumentation framework has an obvious representation as a directed graph where the nodes are arguments and the edges are drawn from attacking to attacked arguments.
The set of attackers of an argument a1 will be denoted as a1⁻ = {a2 : a2 → a1}; the set of arguments attacked by a1 will be denoted as a1⁺ = {a2 : a1 → a2}. We also extend these notations to sets of arguments, i.e. given E ⊆ A, E⁻ = {a2 | ∃a1 ∈ E, a2 → a1} and E⁺ = {a2 | ∃a1 ∈ E, a1 → a2}.
With a little abuse of notation we define S → b ≡ ∃a ∈ S : a → b. Similarly, b → S ≡ ∃a ∈ S : b → a.
Given Γ = 〈A, →〉 and Γ′ = 〈A′, →′〉, Γ ∪ Γ′ = 〈A ∪ A′, → ∪ →′〉.
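These notations can be sketched in Python: an AF is a set of arguments plus a set of attack pairs. The argument names a, b, c below are illustrative only.

```python
# A sketch of Definition 1 and the a-/a+/E-/E+ notations: the attack
# relation is a set of ordered pairs (attacker, attacked).
def attackers(attacks, a):
    # a1- = {a2 : a2 -> a1}
    return {b for (b, c) in attacks if c == a}

def attacked(attacks, a):
    # a1+ = {a2 : a1 -> a2}
    return {c for (b, c) in attacks if b == a}

def ext_attackers(attacks, E):
    # E- = {a2 | exists a1 in E such that a2 -> a1}
    return {b for (b, c) in attacks if c in E}

def ext_attacked(attacks, E):
    # E+ = {a2 | exists a1 in E such that a1 -> a2}
    return {c for (b, c) in attacks if b in E}

# Example framework: a -> b -> c
A = {"a", "b", "c"}
R = {("a", "b"), ("b", "c")}
```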
1.1 Principles for Extension-based Semantics [BG07]
Definition 2. Given an argumentation framework AF = 〈A, →〉, a set S ⊆ A is D-conflict-free, denoted as D-cf(S), if and only if ∄a, b ∈ S such that a → b. A semantics σ satisfies the D-conflict-free principle if and only if ∀AF, ∀E ∈ Eσ(AF), E is D-conflict-free. ♠
Definition 3. Given an argumentation framework AF = 〈A, →〉, an argument a ∈ A is D-acceptable w.r.t. a set S ⊆ A if and only if ∀b ∈ A, b → a ⇒ S → b.
The function FAF : 2^A → 2^A which, given a set S ⊆ A, returns the set of the D-acceptable arguments w.r.t. S, is called the D-characteristic function of AF. ♠
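A minimal sketch of the D-characteristic function, computed by brute force over an explicit attack relation (the representation and names are illustrative, not from [Dun95]):

```python
def characteristic_function(A, attacks, S):
    # F_AF(S): the arguments D-acceptable w.r.t. S, i.e. those arguments a
    # such that every attacker b of a is in turn attacked by some member of S.
    def s_attacks(b):
        return any((c, b) in attacks for c in S)
    return {a for a in A
            if all(s_attacks(b) for (b, t) in attacks if t == a)}
```

For the framework a → b → c, F({}) returns the unattacked arguments, and F({a}) additionally accepts c, which a defends.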
Definition 4. Given an argumentation framework AF = 〈A, →〉, a set S ⊆ A is D-admissible if and only if D-cf(S) and ∀a ∈ S, a is D-acceptable w.r.t. S. The set of all the D-admissible sets of AF is denoted as AS(AF). ♠
Dσ = {AF | Eσ(AF) ≠ ∅}
Definition 5. A semantics σ satisfies the D-admissibility principle if and only if ∀AF ∈ Dσ, Eσ(AF) ⊆ AS(AF), namely ∀E ∈ Eσ(AF) it holds that: a ∈ E ⇒ (∀b ∈ A, b → a ⇒ E → b). ♠
Definition 6. Given an argumentation framework AF = 〈A, →〉, a ∈ A and S ⊆ A, we say that a is D-strongly-defended by S (denoted as D-sd(a, S)) iff ∀b ∈ A, b → a, ∃c ∈ S \ {a} : c → b and D-sd(c, S \ {a}). ♠
Definition 7. A semantics σ satisfies the D-strong admissibility principle if and only if ∀AF ∈ Dσ, ∀E ∈ Eσ(AF) it holds that: a ∈ E ⇒ D-sd(a, E). ♠
Definition 8. A semantics σ satisfies the D-reinstatement principle if and only if ∀AF ∈ Dσ, ∀E ∈ Eσ(AF) it holds that: ∀a ∈ A, (∀b ∈ A, b → a ⇒ E → b) ⇒ a ∈ E. ♠
Definition 9. A set of extensions E is D-I-maximal if and only if ∀E1, E2 ∈ E, if E1 ⊆ E2 then E1 = E2. A semantics σ satisfies the D-I-maximality principle if and only if ∀AF ∈ Dσ, Eσ(AF) is D-I-maximal. ♠
Definition 10. Given an argumentation framework AF = 〈A, →〉, a non-empty set S ⊆ A is D-unattacked if and only if ∄a ∈ (A \ S) : a → S. The set of D-unattacked sets of AF is denoted as US(AF). ♠
Definition 11. Let AF = 〈A, →〉 be an argumentation framework. The restriction of AF to S ⊆ A is the argumentation framework AF↓S = 〈S, → ∩ (S × S)〉. ♠
Definition 12. A semantics σ satisfies the D-directionality principle if and only if ∀AF = 〈A, →〉, ∀S ∈ US(AF), AEσ(AF, S) = Eσ(AF↓S), where AEσ(AF, S) = {(E ∩ S) | E ∈ Eσ(AF)} ⊆ 2^S. ♠
1.2 Acceptability of Arguments [PV02; BG09a]
Definition 13. Given a semantics σ and an argumentation framework AF = 〈A, →〉 ∈ Dσ, an argument a ∈ A is:
• skeptically justified iff ∀E ∈ Eσ(AF), a ∈ E;
• credulously justified iff ∃E ∈ Eσ(AF), a ∈ E. ♠
Definition 14. Given a semantics σ and an argumentation framework AF = 〈A, →〉 ∈ Dσ, an argument a ∈ A is:
• justified iff it is skeptically justified;
• defensible iff it is credulously justified but not skeptically justified;
• overruled iff it is not credulously justified. ♠
1.3 (Some) Semantics [Dun95]
Lemma 1 (Dung's Fundamental Lemma, [Dun95, Lemma 10]). Given an argumentation framework AF = 〈A, →〉, let S ⊆ A be a D-admissible set of arguments, and a, b be arguments which are D-acceptable with respect to S. Then:
1. S′ = S ∪ {a} is D-admissible; and
2. b is D-acceptable with respect to S′. ♣
Theorem 1 ([Dun95, Theorem 11]). Given an argumentation framework AF = 〈A, →〉, the set of all D-admissible sets of 〈A, →〉 forms a complete partial order with respect to set inclusion. ♣
Definition 15 (Complete Extension). Given an argumentation framework AF = 〈A, →〉, S ⊆ A is a D-complete extension iff S is D-conflict-free and S = FAF(S). CO denotes the complete semantics. ♠
Definition 16 (Grounded Extension). Given an argumentation framework AF = 〈A, →〉, the grounded extension of AF is the least complete extension of AF. GR denotes the grounded semantics. ♠
Definition 17 (Preferred Extension). Given an argumentation framework AF = 〈A, →〉, a preferred extension of AF is a maximal (w.r.t. set inclusion) complete extension of AF. PR denotes the preferred semantics. ♠
Definition 18. Given an argumentation framework AF = 〈A, →〉 and S ⊆ A, S⁺ = {a ∈ A | ∃b ∈ S : b → a}. ♠
Definition 19 (Stable Extension). Given an argumentation framework AF = 〈A, →〉, S ⊆ A is a stable extension of AF iff S is a preferred extension and S⁺ = A \ S. ST denotes the stable semantics. ♠
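On small frameworks these semantics can be computed naively: the grounded extension as the least fixed point of the characteristic function, and the stable extensions by exhaustive search. This is an illustrative sketch only, not how the solvers of Chapter 2 work:

```python
from itertools import combinations

def grounded_extension(A, attacks):
    # Least fixed point of the D-characteristic function, iterated from
    # the empty set (a standard construction for finite AFs).
    S = set()
    while True:
        nxt = {a for a in A
               if all(any((c, b) in attacks for c in S)
                      for (b, t) in attacks if t == a)}
        if nxt == S:
            return S
        S = nxt

def stable_extensions(A, attacks):
    # Brute force: S is stable iff it is conflict-free and attacks every
    # argument outside S, i.e. S+ = A \ S.
    exts = []
    for r in range(len(A) + 1):
        for S in map(set, combinations(sorted(A), r)):
            cf = not any(b in S and c in S for (b, c) in attacks)
            outside = {a for a in A if a not in S}
            s_plus = {c for (b, c) in attacks if b in S}
            if cf and s_plus == outside:
                exts.append(S)
    return exts
```

For a → b → c, both the grounded and the unique stable extension are {a, c}.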
                         CO   GR   PR   ST
D-conflict-free          Yes  Yes  Yes  Yes
D-admissibility          Yes  Yes  Yes  Yes
D-strong admissibility   No   Yes  No   No
D-reinstatement          Yes  Yes  Yes  Yes
D-I-maximality           No   Yes  Yes  Yes
D-directionality         Yes  Yes  Yes  No
Table 1.1: Satisfaction of general properties by argumentation semantics [BG07; BCG11]
Figure 1.1: Relationships among argumentation semantics (every stable extension is preferred, every preferred extension is complete, and the grounded extension is a complete extension).
1.4 Labelling-Based Semantics Representation
[Cam06]
Definition 20. Let Γ = 〈A, →〉 be an argumentation framework. A labelling Lab ∈ L(Γ) is a complete labelling of Γ iff it satisfies the following conditions for any a1 ∈ A:
• Lab(a1) = in ⇔ ∀a2 ∈ a1⁻, Lab(a2) = out;
• Lab(a1) = out ⇔ ∃a2 ∈ a1⁻ : Lab(a2) = in. ♠
The grounded and preferred labelling can then be defined on the basis
of complete labellings.
Definition 21. Let Γ = 〈A, →〉 be an argumentation framework. A labelling Lab ∈ L(Γ) is the grounded labelling of Γ if it is the complete labelling of Γ minimizing the set of arguments labelled in, and it is a preferred labelling of Γ if it is a complete labelling of Γ maximizing the set of arguments labelled in. ♠
In order to show the connection between extensions and labellings, let us recall the definition of the function Ext2Lab, returning the labelling corresponding to a D-conflict-free set of arguments S.
Definition 22. Given an AF Γ = 〈A, →〉 and a D-conflict-free set S ⊆ A, the corresponding labelling Ext2Lab(S) is defined as Ext2Lab(S) ≡ Lab, where
• Lab(a1) = in ⇔ a1 ∈ S
• Lab(a1) = out ⇔ ∃a2 ∈ S s.t. a2 → a1
• Lab(a1) = undec ⇔ a1 ∉ S ∧ ∄a2 ∈ S s.t. a2 → a1 ♠

          σ = CO       σ = GR       σ = PR       σ = ST
EXISTSσ   trivial      trivial      trivial      NP-c
CAσ       NP-c         polynomial   NP-c         NP-c
SAσ       polynomial   polynomial   Πᵖ₂-c        coNP-c
VERσ      polynomial   polynomial   coNP-c       polynomial
NEσ       NP-c         polynomial   NP-c         NP-c
Table 1.2: Complexity of decision problems by argumentation semantics [DW09]
[Cam06] shows that there is a bijective correspondence between the
complete, grounded, preferred extensions and the complete, grounded,
preferred labellings, respectively.
Proposition 1. Given an AF Γ = 〈A, →〉, Lab is a complete (grounded, preferred) labelling of Γ if and only if there is a complete (grounded, preferred) extension S of Γ such that Lab = Ext2Lab(S). ♣
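Definition 22 translates directly into code; a small sketch, assuming the input set S is D-conflict-free as the definition requires:

```python
def ext2lab(A, attacks, S):
    # Ext2Lab: members of S are in, arguments attacked by S are out,
    # everything else is undec (S is assumed D-conflict-free).
    lab = {}
    for a in A:
        if a in S:
            lab[a] = "in"
        elif any(b in S for (b, c) in attacks if c == a):
            lab[a] = "out"
        else:
            lab[a] = "undec"
    return lab
```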
The set of complete labellings of Γ is denoted as LCO(Γ), the set of preferred labellings as LPR(Γ), while LGR(Γ) denotes the set including the grounded labelling.
Remark 1. To practise, try ArgTeach [DS14] at http://www-argteach.doc.ic.ac.uk/
Figure 1.2: the ⪯S⊕ relation for any argumentation framework (left) and for argumentation frameworks where stable extensions exist (right).
1.5 Skepticism Relationships [BG09b]
E1 ⪯E E2 denotes that E1 is at least as skeptical as E2.
Definition 23. Let ⪯E be a skepticism relation between sets of extensions. The skepticism relation between argumentation semantics ⪯S is such that for any argumentation semantics σ1 and σ2, σ1 ⪯S σ2 iff ∀AF ∈ Dσ1 ∩ Dσ2, EAF(σ1) ⪯E EAF(σ2). ♠
Definition 24. Given two sets of extensions E1 and E2 of an argumentation framework AF:
• E1 ⪯E∩+ E2 iff ∀E2 ∈ E2, ∃E1 ∈ E1 : E1 ⊆ E2;
• E1 ⪯E∪+ E2 iff ∀E1 ∈ E1, ∃E2 ∈ E2 : E1 ⊆ E2. ♠
Lemma 2. Given two argumentation semantics σ1 and σ2, if for any argumentation framework AF, EAF(σ1) ⊆ EAF(σ2), then σ1 ⪯E∩+ σ2 and σ1 ⪯E∪+ σ2 (i.e. σ1 ⪯E⊕ σ2). ♣
1.6 Signatures [Dun+14]
Let A be a countably infinite domain of arguments, and AF_A = {〈A, →〉 | A ⊆ A, → ⊆ A × A}.
Definition 25. The signature Σσ of a semantics σ is defined as
Σσ = {σ(F) | F ∈ AFA}
(i.e. the collection of all possible sets of extensions an AF can possess under a semantics). ♠
Given S ⊆ 2^A, ArgsS = ⋃_{S∈S} S and PairsS = {〈a, b〉 | ∃S ∈ S s.t. {a, b} ⊆ S}. S is called an extension-set if ArgsS is finite.
Definition 26. Let S ⊆ 2^A. S is incomparable if ∀S, S′ ∈ S, S ⊆ S′ implies S = S′. ♠
Definition 27. An extension-set S ⊆ 2^A is tight if ∀S ∈ S and a ∈ ArgsS it holds that if S ∪ {a} ∉ S then there exists a b ∈ S such that 〈a, b〉 ∉ PairsS. ♠
Definition 28. S ⊆ 2^A is adm-closed if for each A, B ∈ S the following holds: if 〈a, b〉 ∈ PairsS for each a, b ∈ A ∪ B, then also A ∪ B ∈ S. ♠
Proposition 2. For each F ∈ AF_A:
• ST(F) is incomparable and tight;
• PR(F) is non-empty, incomparable and adm-closed. ♣
Theorem 2. The signatures for ST and PR are:
• ΣST = {S | S is incomparable and tight};
• ΣPR = {S ≠ ∅ | S is incomparable and adm-closed}. ♣
Consider S = { {a, d, e}, {b, c, e}, {a, b, d} }.
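The signature conditions can be checked mechanically on such an extension-set. The sketch below tests incomparability (no extension is a proper subset of another) and tightness (read as: if S ∪ {a} ∉ S, some b ∈ S must not co-occur with a in any extension); the encoding as frozensets is an implementation choice for illustration:

```python
def is_incomparable(S):
    # No extension is a proper subset of another extension
    return not any(X < Y for X in S for Y in S)

def is_tight(S):
    # For every S in the set and every argument a, either S + {a} is also
    # in the set, or some b in S never co-occurs with a.
    args = frozenset().union(*S)
    pairs = {frozenset({a, b}) for E in S for a in E for b in E}
    return all(
        (E | {a}) in S or any(frozenset({a, b}) not in pairs for b in E)
        for E in S for a in args
    )

S = {frozenset("ade"), frozenset("bce"), frozenset("abd")}
```

This S is incomparable but not tight (b co-occurs with each of a, d, e, yet {a, b, d, e} is not in S), so by Theorem 2 it is not realizable under the stable semantics.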
1.7 Decomposability and Transparency [Bar+14]
Definition 29. Given an argumentation framework AF = (A, →), a labelling-based semantics σ associates with AF a subset of L(AF), denoted as Lσ(AF). ♠
Definition 30. Given AF = (A, →) and a set Args ⊆ A, the input of Args, denoted as Args^inp, is the set {B ∈ A \ Args | ∃A ∈ Args, (B, A) ∈ →}; the conditioning relation of Args, denoted as Args^R, is defined as → ∩ (Args^inp × Args). ♠
Definition 31. An argumentation framework with input is a tuple (AF, I, LI, RI), including an argumentation framework AF = (A, →), a set of arguments I such that I ∩ A = ∅, a labelling LI ∈ L_I and a relation RI ⊆ I × A. A local function assigns to any argumentation framework with input a (possibly empty) set of labellings of AF, i.e. F(AF, I, LI, RI) ∈ 2^L(AF). ♠
Definition 32. Given an argumentation framework with input (AF, I, LI, RI), the standard argumentation framework w.r.t. (AF, I, LI, RI) is defined as AF′ = (A ∪ I′, → ∪ R′I), where I′ = I ∪ {A′ | A ∈ out(LI)} and R′I = RI ∪ {(A′, A) | A ∈ out(LI)} ∪ {(A, A) | A ∈ undec(LI)}. ♠
Definition 33. Given a semantics σ, the canonical local function of σ (also called local function of σ) is defined as Fσ(AF, I, LI, RI) = {Lab↓A | Lab ∈ Lσ(AF′)}, where AF = (A, →) and AF′ is the standard argumentation framework w.r.t. (AF, I, LI, RI). ♠
Definition 34. A semantics σ is complete-compatible iff the following conditions hold:
1. For any argumentation framework AF = (A, →), every labelling L ∈ Lσ(AF) satisfies the following conditions:
• if A ∈ A is initial, then L(A) = in
• if B ∈ A and there is an initial argument A which attacks B, then L(B) = out
• if C ∈ A is self-defeating, and there are no attackers of C besides C itself, then L(C) = undec
2. For any set of arguments I and any labelling LI ∈ L_I, the argumentation framework AF′ = (I′, →′), where I′ = I ∪ {A′ | A ∈ out(LI)} and →′ = {(A′, A) | A ∈ out(LI)} ∪ {(A, A) | A ∈ undec(LI)}, admits a (unique) labelling, i.e. |Lσ(AF′)| = 1. ♠
Definition 35. A semantics σ is fully decomposable (or simply decomposable) iff there is a local function F such that for every argumentation framework AF = (A, →) and every partition P = {P1, ..., Pn} of A, Lσ(AF) = U(P, AF, F), where
U(P, AF, F) = {L_{P1} ∪ ... ∪ L_{Pn} | L_{Pi} ∈ F(AF↓_{Pi}, Pi^inp, (⋃_{j=1...n, j≠i} L_{Pj})↓_{Pi^inp}, Pi^R)}. ♠
Definition 36. A complete-compatible semantics σ is top-down decomposable iff for any argumentation framework AF = (A, →) and any partition P = {P1, ..., Pn} of A, it holds that Lσ(AF) ⊆ U(P, AF, Fσ). ♠
Definition 37. A complete-compatible semantics σ is bottom-up decomposable iff for any argumentation framework AF = (A, →) and any partition P = {P1, ..., Pn} of A, it holds that Lσ(AF) ⊇ U(P, AF, Fσ). ♠
                            CO   ST   GR   PR
Full decomposability        Yes  Yes  No   No
Top-down decomposability    Yes  Yes  Yes  Yes
Bottom-up decomposability   Yes  Yes  No   No
Table 1.3: Decomposability properties of argumentation semantics.
1.8 Extension-based I/O Characterisation [GLW16]
Definition 38. Given input arguments I and output arguments O with I ∩ O = ∅, an I/O-gadget is an AF F = (A, R) such that I, O ⊆ A and I⁻_F = ∅. ♠
Definition 39. Given an I/O-gadget F = (A, R), the injection of J ⊆ I to F is the AF (F, J) = (A ∪ {z}, R ∪ {(z, i) | i ∈ (I \ J)}). ♠
Definition 40. An I/O-specification consists of two sets I, O ⊆ A and a total function f : 2^I → 2^{2^O}. ♠
Definition 41. The I/O-gadget F satisfies I/O-specification f under semantics σ iff ∀J ⊆ I : σ( (F, J))|O = f(J). ♠
Theorem 3. An I/O-specification f is satisfiable under σ iff:
• ST: no condition (any f);
• PR: ∀J ⊆ I : |f(J)| ≥ 1;
• CO: ∀J ⊆ I : |f(J)| ≥ 1 ∧ ⋂ f(J) ∈ f(J);
• GR: ∀J ⊆ I : |f(J)| = 1. ♣
2 Implementations
Acknowledgement
This handout includes material from a number of collaborators including Massimiliano Giacomin, Mauro Vallati, and Stefan Woltran.
A comprehensive survey has recently been published in [Cha+15].
2.1 Ad Hoc Procedures
NAD-Alg [NDA12; NAD14]
2.2 Constraint Satisfaction Programming
A Constraint Satisfaction Problem (CSP) P [BS12; RBW08] is a triple P = 〈X, D, C〉 such that:
• X = 〈x1, ..., xn〉 is a tuple of variables;
• D = 〈D1, ..., Dn〉 is a tuple of domains such that ∀i, xi ∈ Di;
• C = 〈C1, ..., Ct〉 is a tuple of constraints, where ∀j, Cj = 〈R_{Sj}, Sj〉, Sj ⊆ {xi | xi is a variable}, R_{Sj} ⊆ S^D_j × S^D_j where S^D_j = {Di | Di is a domain and xi ∈ Sj}.
A solution to the CSP P is A = 〈a1, ..., an〉 where ∀i, ai ∈ Di and ∀j, R_{Sj} holds on the projection of A onto the scope Sj. If the set of solutions is empty, the CSP is unsatisfiable.
CONArg2 [BS12]
In [BS12], the authors propose a mapping from AFs to CSPs.
Given an AF Γ, they first create a variable for each argument whose
domain is always {0,1} — ∀ai ∈ A ,∃xi ∈ X such that Di = {0,1}.
Subsequently, they describe constraints associated with different definitions of Dung's argumentation framework: for instance, {a1, a2} ⊆ A is D-conflict-free iff ¬(x1 = 1 ∧ x2 = 1).
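This mapping can be rendered by brute force in a few lines; this is an illustrative sketch, not the actual CONArg2 implementation, which relies on a real CSP solver:

```python
from itertools import product

def conflict_free_assignments(args, attacks):
    # One 0/1 variable per argument, and one constraint
    # not(x_i = 1 and x_j = 1) per attack (i, j); assignments are
    # enumerated exhaustively instead of being propagated by a solver.
    order = sorted(args)
    sols = []
    for values in product((0, 1), repeat=len(order)):
        x = dict(zip(order, values))
        if all(not (x[a] == 1 and x[b] == 1) for (a, b) in attacks):
            sols.append({a for a in order if x[a] == 1})
    return sols
```

For two arguments with a → b, the solutions are exactly the conflict-free sets ∅, {a} and {b}.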
2.3 Answer Set Programming
Answer Set Programming (ASP) [Fab13] is a declarative problem-solving paradigm. In ASP, representation is done using a rule-based language, while reasoning is performed using implementations of general-purpose algorithms, referred to as ASP solvers.
AspartixM [EGW10; Dvo+11]
AspartixM [Dvo+11] expresses argumentation semantics in Answer Set Programming (ASP): a single program is used to encode a particular argumentation semantics, and the instance of an argumentation framework is given as an input database. Tests for subset-maximality exploit the metasp optimisation frontend for the ASP-package gringo/claspD.
Given an AF Γ, Aspartix encodes the requirements for a “semantics” (e.g. the D-conflict-free requirements) in an ASP program whose database considers:
{arg(a) | a ∈ A} ∪ {defeat(a1, a2) | 〈a1, a2〉 ∈ →}
The following program fragment is thus used to check the D-conflict-freeness [Dvo+11]:
πcf = { in(X) ← not out(X), arg(X);
        out(X) ← not in(X), arg(X);
        ← in(X), in(Y), defeat(X,Y) }.
πST = { in(X) ← not out(X), arg(X);
        out(X) ← not in(X), arg(X);
        ← in(X), in(Y), defeat(X,Y);
        defeated(X) ← in(Y), defeat(Y,X);
        ← out(X), not defeated(X) }.
2.4 Propositional Satisfiability Problems
In the propositional satisfiability problem (SAT) the goal is to determine
whether a given Boolean formula is satisfiable. A variable assignment
that satisfies a formula is a solution.30
In SAT, formulae are commonly expressed in Conjunctive Normal Form (CNF). A formula in CNF is a conjunction of clauses, where clauses are disjunctions of literals, and a literal is either positive (a variable) or negative (the negation of a variable). If at least one of the literals in a clause is true, then the clause is satisfied, and if all clauses in the formula are satisfied then the formula is satisfied and a solution has been found.
PrefSAT [Cer+14b]
Requirements for a complete labelling as a CNF [Cer+14b]: for each argument ai ∈ A, three propositional variables are considered: Ii (true iff Lab(ai) = in), Oi (true iff Lab(ai) = out), and Ui (true iff Lab(ai) = undec). Given |A| = k and φ : {1, ..., k} → A:

⋀_{i ∈ {1,...,k}} [ (Ii ∨ Oi ∨ Ui) ∧ (¬Ii ∨ ¬Oi) ∧ (¬Ii ∨ ¬Ui) ∧ (¬Oi ∨ ¬Ui) ]   (2.1)

⋀_{i | φ(i)⁻ = ∅} Ii   (2.2)

⋀_{i | φ(i)⁻ ≠ ∅} ( Ii ∨ ⋁_{j | φ(j)→φ(i)} ¬Oj )   (2.3)

⋀_{i | φ(i)⁻ ≠ ∅} ⋀_{j | φ(j)→φ(i)} ( ¬Ii ∨ Oj )   (2.4)

⋀_{i | φ(i)⁻ ≠ ∅} ⋀_{j | φ(j)→φ(i)} ( ¬Ij ∨ Oi )   (2.5)

⋀_{i | φ(i)⁻ ≠ ∅} ( ¬Oi ∨ ⋁_{j | φ(j)→φ(i)} Ij )   (2.6)

⋀_{i | φ(i)⁻ ≠ ∅} ⋀_{k | φ(k)→φ(i)} ( Ui ∨ ¬Uk ∨ ⋁_{j | φ(j)→φ(i)} Ij )   (2.7)

⋀_{i | φ(i)⁻ ≠ ∅} [ ⋀_{j | φ(j)→φ(i)} ( ¬Ui ∨ ¬Ij ) ∧ ( ¬Ui ∨ ⋁_{j | φ(j)→φ(i)} Uj ) ]   (2.8)

⋁_{i ∈ {1,...,k}} Ii   (2.9)
As noticed in [Cer+14b], the conjunction of the above formulae is redundant. However, the non-redundant CNFs are not equivalent in an empirical evaluation [Cer+14b]: the overall performance is significantly affected by the chosen CNF encoding–SAT solver configuration pair.
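On small instances, the complete-labelling conditions that the CNF above encodes (Definition 20) can be verified by brute force; this is an illustrative cross-check, not the SAT-based procedure itself:

```python
from itertools import product

def complete_labellings(args, attacks):
    # Enumerate all in/out/undec labellings and keep those satisfying the
    # complete-labelling conditions: in iff all attackers are out, and
    # out iff some attacker is in. Feasible only for tiny frameworks.
    order = sorted(args)
    attackers = {a: {b for (b, c) in attacks if c == a} for a in order}
    found = []
    for lab in product(("in", "out", "undec"), repeat=len(order)):
        L = dict(zip(order, lab))
        if all((L[a] == "in") == all(L[b] == "out" for b in attackers[a])
               and (L[a] == "out") == any(L[b] == "in" for b in attackers[a])
               for a in order):
            found.append(L)
    return found
```

For two mutually attacking arguments there are exactly three complete labellings: the two where one argument is in and the other out, and the all-undec one.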
Algorithm 1 Enumerating the D-preferred extensions of an AF
PrefSAT(Γ)
1: Input: Γ = 〈A, →〉
2: Output: Ep ⊆ 2^A
3: Ep := ∅
4: cnf := ΠΓ
5: repeat
6:   cnfdf := cnf
7:   prefcand := ε
8:   repeat
9:     lastcompfound := SatS(cnfdf)
10:    if lastcompfound ≠ ε then
11:      prefcand := lastcompfound
12:      for a1 ∈ I-ARGS(lastcompfound) do
13:        cnfdf := cnfdf ∧ I_{φ⁻¹(a1)}
14:      end for
15:      remaining := FALSE
16:      for a1 ∈ A \ I-ARGS(lastcompfound) do
17:        remaining := remaining ∨ I_{φ⁻¹(a1)}
18:      end for
19:      cnfdf := cnfdf ∧ remaining
20:    end if
21:  until (lastcompfound = ε ∨ I-ARGS(lastcompfound) = A)
22:  if prefcand ≠ ε then
23:    Ep := Ep ∪ {I-ARGS(prefcand)}
24:    oppsolution := FALSE
25:    for a1 ∈ A \ I-ARGS(prefcand) do
26:      oppsolution := oppsolution ∨ I_{φ⁻¹(a1)}
27:    end for
28:    cnf := cnf ∧ oppsolution
29:  end if
30: until (prefcand = ε)
31: if Ep = ∅ then
32:   Ep := {∅}
33: end if
34: return Ep
Parallel-SCCp [Cer+14a; Cer+15]
Based on the SCC-Recursiveness Schema [BGG05].
Algorithm 1 Computing D-preferred labellings of an AF
P-PREF(Γ)
1: Input: Γ = 〈A, →〉
2: Output: Ep ∈ 2^L(Γ)
3: return P-SCC-REC(Γ, A)
Algorithm 2 Greedy computation of base cases
GREEDY(L, C)
1: Input: L = (L^1 := {S^1_1, ..., S^1_k}, ..., L^n := {S^n_1, ..., S^n_h}), C ⊆ A
2: Output: M = {..., (Si, Bi), ...}
3: M := ∅
4: for S ∈ ⋃^n_{i=1} L^i do in parallel
5:   B := B-PR(Γ↓S, S ∩ C)
6:   M := M ∪ {(S, B)}
7: end for
8: return M
BOUNDCOND(Γ, Si, Lab) returns (O, I) where O = {a1 ∈ Si | ∃a2 ∈ S ∩ a1⁻ : Lab(a2) = in} and I = {a1 ∈ Si | ∀a2 ∈ S ∩ a1⁻, Lab(a2) = out}, with S ≡ S1 ∪ ... ∪ S_{i−1}.
Algorithm 3 Determining the D-grounded labelling of an AF in a set C
GROUNDED(Γ, C)
1: Input: Γ = 〈A, →〉, C ⊆ A
2: Output: (Lab, U) : U ⊆ A, Lab ∈ L(A \ U)
3: Lab := ∅
4: U := A
5: repeat
6:   initialfound := ⊥
7:   for a1 ∈ C do
8:     if {a2 ∈ U | a2 → a1} = ∅ then
9:       initialfound := ⊤
10:      Lab := Lab ∪ {(a1, in)}
11:      U := U \ {a1}
12:      C := C \ {a1}
13:      for a2 ∈ (U ∩ a1⁺) do
14:        Lab := Lab ∪ {(a2, out)}
15:        U := U \ {a2}
16:        C := C \ {a2}
17:      end for
18:    end if
19:  end for
20: until (¬initialfound)
21: return (Lab, U)
Algorithm 4 Computing D-preferred labellings of an AF in C
P-SCC-REC(Γ, C)
1: Input: Γ = 〈A, →〉, C ⊆ A
2: Output: Ep ∈ 2^L(Γ)
3: (Lab, U) := GROUNDED(Γ, C)
4: Ep := {Lab}
5: Γ := Γ↓U
6: L := (L^1 := {S^1_1, ..., S^1_k}, ..., L^n := {S^n_1, ..., S^n_h}) = SCCS-LIST(Γ)
7: M := {..., (Si, Bi), ...} = GREEDY(L, C)
8: for l ∈ {1, ..., n} do
9:   El := {E^{S1}_l := (), ..., E^{Sk}_l := ()}
10:  for S ∈ L^l do in parallel
11:    for Lab ∈ Ep do in parallel
12:      (O, I) := L-COND(Γ, S, L^l, Lab)
13:      if I = ∅ then
14:        E^S_l[Lab] := {{(a1, out) | a1 ∈ O} ∪ {(a1, undec) | a1 ∈ S \ O}}
15:      else
16:        if I = S then
17:          E^S_l[Lab] := B where (S, B) ∈ M
18:        else
19:          if O = ∅ then
20:            E^S_l[Lab] := B-PR(Γ↓S, I ∩ C)
21:          else
22:            E^S_l[Lab] := {{(a1, out) | a1 ∈ O}}
23:            E^S_l[Lab] := E^S_l[Lab] ⊗ P-SCC-REC(Γ↓_{S\O}, I ∩ C)
24:          end if
25:        end if
26:      end if
27:    end for
28:  end for
29:  for S ∈ L^l do
30:    E′p := ∅
31:    for Lab ∈ Ep do in parallel
32:      E′p := E′p ∪ ({Lab} ⊗ E^S_l[Lab])
33:    end for
34:    Ep := E′p
35:  end for
36: end for
37: return Ep
2.5 Second-order Solver [BJT16]
http://research.ics.aalto.fi/software/sat/sat-to-sat/so2grounder.shtml
Given a representation of an argumentation framework such as:
• a(X) holds iff X is an argument;
• r(X, Y) holds iff X attacks Y;
then:
• TCF = { ∄N, M | r(N, M) ∧ s(N) ∧ s(M). }
• TAD = { ∀N | att(N) ⇔ ( a(N) ∧ ∃M | r(M, N) ∧ s(M) ).
          ∀N | def(N) ⇔ ( a(N) ∧ ∀M | r(M, N) ⇒ att(M) ). }
• TFP = { TAD. ∀N | s(N) ⇔ def(N). }
• TGR = { TFP.
          ∄ s′, att′, def′ :
            TFP[s/s′, def/def′, att/att′] ∧
            ( ∀N | s′(N) ⇒ s(N) ) ∧
            ( ∃N | s(N) ∧ ¬s′(N) ) }
• TST = { TAD. ∀N | a(N) ⇒ ( s(N) ⇔ ¬att(N) ). }
• TCO = { TFP. TCF. }
• TPR = { TCO.
          ∄ s′, att′, def′ :
            TCO[s/s′, def/def′, att/att′] ∧
            ( ∀N | s(N) ⇒ s′(N) ) ∧
            ( ∃N | s′(N) ∧ ¬s(N) ) }
The unary predicate s describes the extensions in the various semantics.
2.6 Which One?
We need to be smart
Holger H. Hoos, Invited Keynote Talk at ECAI2014
Features for AFs [VCG14; CGV14]
Directed Graph (26 features)
Structure: # vertices (|A|), # edges (|→|), # vertices / # edges, # edges / # vertices, density
Degree (attackers): average, stdev, max, min
SCCs: #, average, stdev, max, min
Structure: # self-defeating, # unattacked, flow hierarchy, Eulerian, aperiodic
CPU-time: . . .
Undirected Graph (24 features)
Structure: # edges, # vertices / # edges, # edges / # vertices, density
Degree: average, stdev, max, min
SCCs: #, average, stdev, max, min
Structure: transitivity
3-cycles: #, average, stdev, max, min
CPU-time: . . .
Average CPU-time (and stdev) needed for extracting the features:

Directed Graph Features (DG)
Class        CPU-Time Mean   stdDev   # feat
Graph Size   0.001           0.009    5
Degree       0.003           0.009    4
SCC          0.046           0.036    5
Structure    2.304           2.868    5

Undirected Graph Features (UG)
Class        CPU-Time Mean   stDev    # feat
Graph Size   0.001           0.003    4
Degree       0.002           0.004    4
SCC          0.011           0.009    5
Structure    0.799           0.684    1
Triangles    0.787           0.671    5
Best Features for Runtime Prediction [CGV14]
Determined by a greedy forward search based on the Correlation-based
Feature Selection (CFS) attribute evaluator.
Solver      B1                      B2                 B3
AspartixM   num. arguments          density (DG)       size max. SCC
PrefSAT     density (DG)            num. SCCs          aperiodicity
NAD-Alg     density (DG) CPU-time   density CPU-time   Eulerian
SSCp        density (DG)            num. SCCs          size max. SCC
Predicting the (log) Runtime [CGV14]
RMSE of regression (lower is better):
            B1     B2     B3     DG     UG     SCC    All
AspartixM   0.66   0.49   0.49   0.48   0.49   0.52   0.48
PrefSAT     1.39   0.93   0.93   0.89   0.92   0.94   0.89
NAD-Alg     1.48   1.47   1.47   0.77   0.57   1.61   0.55
SSCp        1.36   0.80   0.78   0.75   0.75   0.79   0.74
The RMSE on the log runtime is defined as √( (Σⁿᵢ₌₁ (log₁₀(tᵢ) − log₁₀(yᵢ))²) / n ), where tᵢ is the actual and yᵢ the predicted runtime.
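A sketch of this measure (function and variable names are illustrative):

```python
import math

def log_rmse(actual, predicted):
    # Root mean squared error between base-10 logarithms of the actual
    # and predicted runtimes, as used to evaluate the regression models.
    n = len(actual)
    return math.sqrt(sum((math.log10(t) - math.log10(y)) ** 2
                         for t, y in zip(actual, predicted)) / n)
```

A prediction off by one order of magnitude on every instance yields a log-RMSE of 1.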
Best Features for Classification [CGV14]
Determined by a greedy forward search based on the Correlation-based Feature Selection (CFS) attribute evaluator.
C-B1             C-B2           C-B3
num. arguments   density (DG)   min attackers
Classification [CGV14]
Classification (higher is better):
                  C-B1    C-B2    C-B3    DG      UG      SCC     All
Accuracy          48.5%   70.1%   69.9%   78.9%   79.0%   55.3%   79.5%
Prec. AspartixM   35.0%   64.6%   63.7%   74.5%   74.9%   42.2%   76.1%
Prec. PrefSAT     53.7%   67.8%   68.1%   79.6%   80.5%   60.4%   80.1%
Prec. NAD-Alg     26.5%   69.2%   69.0%   81.7%   85.1%   35.3%   86.0%
Prec. SSCp        54.3%   73.0%   72.7%   76.6%   76.8%   57.8%   77.2%
Selecting the Best Algorithm [CGV14]
Metric: Fastest
(max. 1007)
AspartixM 106
NAD-Alg 170
PrefSAT 278
SSCp 453
EPMs Regression 755
EPMs Classification 788
Metric: IPC
(max. 1007)
NAD-Alg 210.1
AspartixM 288.3
PrefSAT 546.7
SSCp 662.4
EPMs Regression 887.7
EPMs Classification 928.1
IPC score¹: for each AF, each system gets a score of T*/T, where T is its execution time and T* is the best execution time among the compared systems, or a score of 0 if it fails on that AF. Runtimes below 0.01 seconds get by default the maximal score of 1. The IPC score thus considers, at the same time, the runtimes and the solved instances.
¹ http://ipc.informatik.uni-freiburg.de/
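The per-instance IPC score can be sketched as follows; the dictionary-based interface is an assumption for illustration:

```python
def ipc_score(runtimes):
    # runtimes: {system: time in seconds, or None if the system failed}.
    # Each system scores T*/T (best time over its own time), 0 on failure,
    # and runtimes below 0.01 s get the maximal score of 1.
    solved = [t for t in runtimes.values() if t is not None]
    best = min(solved) if solved else None
    scores = {}
    for system, t in runtimes.items():
        if t is None:
            scores[system] = 0.0
        elif t < 0.01:
            scores[system] = 1.0
        else:
            scores[system] = best / t
    return scores
```

Summing these per-AF scores over a benchmark gives the totals reported in the tables above (max. 1007 for 1007 AFs).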
3 Ranking-Based Semantics
3.1 The Categoriser Semantics [BH01]
Definition 42 ([BH01]). Let Γ = 〈A ,→〉 be an argumentation framework.
The categoriser function Cat : A → ]0, 1] is defined as:
Cat(a1) = 1 if a1⁻ = ∅; otherwise Cat(a1) = 1 / (1 + Σ_{a2 ∈ a1⁻} Cat(a2)). ♠
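On finite frameworks the categoriser values can be approximated by fixed-point iteration; a sketch (the fixed iteration count is an arbitrary choice for illustration, not from [BH01]):

```python
def categoriser(args, attacks, iters=1000):
    # Iterate Cat(a) = 1 / (1 + sum of Cat over direct attackers); an
    # unattacked argument gets Cat(a) = 1/(1+0) = 1 automatically.
    attackers = {a: [b for (b, c) in attacks if c == a] for a in args}
    cat = {a: 1.0 for a in args}
    for _ in range(iters):
        cat = {a: 1.0 / (1.0 + sum(cat[b] for b in attackers[a]))
               for a in args}
    return cat
```

For b → a this yields Cat(b) = 1 and Cat(a) = 1/2, so b is ranked above a.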
3.2 Properties for Ranking-Based Semantics
[Bon+16]
Preliminary notions
Definition 43. Let Γ = 〈A, →〉 and a1, a2 ∈ A. A path from a2 to a1, noted P(a2, a1), is a sequence s = 〈b0, ..., bn〉 of arguments such that b0 = a1, bn = a2, and ∀i < n, 〈bi+1, bi〉 ∈ →. We denote by lP = n the length of P.
A defender (resp. attacker) of a1 is an argument situated at the beginning of an even-length (resp. odd-length) path. We denote the multiset of defenders and attackers of a1 by R^+_n(a1) = {a2 | ∃P(a2, a1) with lP ∈ 2ℕ} and R^−_n(a1) = {a2 | ∃P(a2, a1) with lP ∈ 2ℕ + 1} respectively. The direct attackers of a1 are the arguments in R^−_1(a1) = a1⁻. An argument a1 is defended if R^+_2(a1) ≠ ∅.
A defence root (resp. attack root) is a non-attacked defender (resp. attacker). We denote the multiset of defence roots and attack roots of a1 by BR^+_n(a1) = {a2 ∈ R^+_n(a1) | a2⁻ = ∅} and BR^−_n(a1) = {a2 ∈ R^−_n(a1) | a2⁻ = ∅} respectively. A path from a2 to a1 is a defence branch (resp. attack branch) if a2 is a defence (resp. attack) root of a1. Let us note BR^+(a1) = ⋃_n BR^+_n(a1) and BR^−(a1) = ⋃_n BR^−_n(a1). ♠
Definition 44. A ranking-based semantics σ associates to any argumentation framework Γ = 〈A, →〉 a ranking ⪰^σ_Γ on A, where ⪰^σ_Γ is a preorder (a reflexive and transitive relation) on A. a1 ⪰^σ_Γ a2 means that a1 is at least as acceptable as a2. a1 ≃^σ_Γ a2 iff a1 ⪰^σ_Γ a2 and a2 ⪰^σ_Γ a1. ♠
Definition 45. A lexicographical order between two vectors of real numbers V = 〈V1, ..., Vn〉 and V′ = 〈V′1, ..., V′n〉 is defined as V ⪰lex V′ iff ∃i ≤ n s.t. Vi ≥ V′i and ∀j < i, Vj = V′j. ♠
Definition 46. An isomorphism γ between two argumentation frameworks Γ = 〈A, →〉 and Γ′ = 〈A′, →′〉 is a bijective function γ : A → A′ such that ∀a1, a2 ∈ A, 〈a1, a2〉 ∈ → iff 〈γ(a1), γ(a2)〉 ∈ →′. With a slight abuse of notation, we will note Γ′ = γ(Γ). ♠
Definition 47 ([AB13]). Let ⪰ be a ranking on a set of arguments A. For any S1, S2 ⊆ A, S1 ≥S S2 is a group comparison iff there exists an injective mapping f from S2 to S1 such that ∀a1 ∈ S2, f(a1) ⪰ a1. S1 >S S2 is a strict group comparison iff S1 ≥S S2 and (|S2| < |S1| or ∃a1 ∈ S2, f(a1) ≻ a1). ♠
Definition 48. Let Γ = 〈A, →〉 and a1 ∈ A. The defence of a1 is simple iff every defender of a1 attacks exactly one direct attacker of a1. The defence of a1 is distributed iff every direct attacker of a1 is attacked by at most one argument. ♠
Definition 49. Let Γ = 〈A ,→〉, a1 ∈ A . The defence branch added to a1
is P+(a1) = 〈A ,→ 〉, with A = {b0,...,bn},n ∈ 2N,b0 = a1,A ∩ A = {a1},15
and → = {〈bi,bi−1〉 | i ≤ n}. The attack branch added to a1, denoted
P−(a1) is defined similarly except that the sequence is of odd length (i.e.
n = 2N+1). ♠
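The construction of Definition 49 amounts to grafting a fresh chain of attackers onto a1. A small Python sketch, under an illustrative set-based representation of frameworks (an even-length chain is a defence branch, an odd-length one an attack branch):

```python
# Sketch of Definition 49: attach a fresh branch b_n -> ... -> b_1 -> a1
# to argument a1. Representation (set of nodes, set of directed attacks)
# is an illustrative assumption.

def add_branch(args, attacks, a1, n):
    """Return a new AF with a branch of length n attached to a1."""
    branch = [a1] + [f"b{i}" for i in range(1, n + 1)]  # b0 = a1
    new_attacks = {(branch[i], branch[i - 1]) for i in range(1, n + 1)}
    return args | set(branch), attacks | new_attacks

# even n gives a defence branch, odd n an attack branch
args, attacks = add_branch({'a1'}, set(), 'a1', 2)
print(sorted(args))     # ['a1', 'b1', 'b2']
print(sorted(attacks))  # [('b1', 'a1'), ('b2', 'b1')]
```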
Properties

Given a ranking-based semantics σ, Γ = 〈A,→〉, ∀a1,a2 ∈ A:

Abstraction (Abs) [AB13]. The ranking on A should be defined only
on the basis of the attacks between arguments.
Let Γ′ = 〈A′,→′〉. For any isomorphism γ s.t. Γ′ = γ(Γ), a1 ⪰σΓ a2 iff
γ(a1) ⪰σΓ′ γ(a2).

Independence (In) [MT08; AB13]. The ranking between two arguments
a1 and a2 should be independent of any argument that is neither
connected to a1 nor to a2.
∀Γ′ = 〈A′,→′〉 ∈ cc(Γ),¹ ∀a1,a2 ∈ A′, a1 ⪰σΓ′ a2 ⇒ a1 ⪰σΓ a2.

Void Precedence (VP) [CL05; MT08; AB13]. A non-attacked argument
is ranked strictly higher than any attacked argument.
a1⁻ = ∅ and a2⁻ ≠ ∅ ⇒ a1 ≻σ a2.

Self-Contradiction (SC) [MT08]. A self-attacking argument is ranked
lower than any non self-attacking argument.
〈a1,a1〉 ∉ → and 〈a2,a2〉 ∈ → ⇒ a1 ≻σ a2.

¹cc(Γ) denotes the set of connected components of an AF Γ.
Cardinality Precedence (CP) [AB13]. The greater the number of direct
attackers for an argument, the weaker the level of acceptability of
this argument.
|a1⁻| < |a2⁻| ⇒ a1 ≻σ a2.

Quality Precedence (QP) [AB13]. The greater the acceptability of
one direct attacker for an argument, the weaker the level of acceptability
of this argument.
∃a3 ∈ a2⁻ s.t. ∀a4 ∈ a1⁻, a3 ≻σ a4 ⇒ a1 ≻σ a2.

Counter-Transitivity (CT) [AB13]. If the direct attackers of a2 are
at least as numerous and acceptable as those of a1, then a1 is at least as
acceptable as a2.
a2⁻ ≥S a1⁻ ⇒ a1 ⪰σ a2.

Strict Counter-Transitivity (SCT) [AB13]. If CT is satisfied and either
the direct attackers of a2 are strictly more numerous or acceptable
than those of a1, then a1 is strictly more acceptable than a2.
a2⁻ >S a1⁻ ⇒ a1 ≻σ a2.

Defence Precedence (DP) [AB13]. For two arguments with the same
number of direct attackers, a defended argument is ranked higher than a
non-defended argument.
|a1⁻| = |a2⁻|, {a1⁻}⁻ ≠ ∅ and {a2⁻}⁻ = ∅ ⇒ a1 ≻σ a2.

Distributed-Defence Precedence (DDP) [AB13]. The best defence
is when each defender attacks a distinct attacker.
|a1⁻| = |a2⁻| and {a1⁻}⁻ = {a2⁻}⁻; if the defence of a1 is simple and distributed
and the defence of a2 is simple but not distributed, then a1 ≻σ a2.

Strict addition of Defence Branch (⊕DB) [CL05]. Adding a defence
branch to any argument improves its ranking.
Given γ an isomorphism. If Γ* = Γ ∪ γ(Γ) ∪ P⁺(γ(a1)), then γ(a1) ≻σΓ* a1.

Increase of Defence Branch (↑DB) [CL05]. Increasing the length of
a defence branch of an argument degrades its ranking.
Given γ an isomorphism. If a2 ∈ BR⁺(a1), a2 ∉ BR⁻(a1) and Γ* = Γ ∪ γ(Γ) ∪
P⁺(γ(a2)), then a1 ≻σΓ* γ(a1).

Addition of Defence Branch (+DB) [CL05]. Adding a defence branch
to an attacked argument improves its ranking.
Given γ an isomorphism. If Γ* = Γ ∪ γ(Γ) ∪ P⁺(γ(a1)) and |a1⁻| ≠ 0, then
γ(a1) ≻σΓ* a1.
Increase of Attack Branch (↑AB) [CL05]. Increasing the length of
an attack branch of an argument improves its ranking.
Given γ an isomorphism. If a2 ∈ BR⁻(a1), a2 ∉ BR⁺(a1) and Γ* = Γ ∪ γ(Γ) ∪
P⁺(γ(a2)), then γ(a1) ≻σΓ* a1.

Addition of Attack Branch (+AB) [CL05]. Adding an attack branch
to any argument degrades its ranking.
Given γ an isomorphism. If Γ* = Γ ∪ γ(Γ) ∪ P⁻(γ(a1)), then a1 ≻σΓ* γ(a1).

Total (Tot) [Bon+16]. All pairs of arguments can be compared.
a1 ⪰σ a2 or a2 ⪰σ a1.

Non-attacked Equivalence (NaE) [Bon+16]. All the non-attacked
arguments have the same rank.
a1⁻ = ∅ and a2⁻ = ∅ ⇒ a1 ≃σ a2.

Attack vs Full Defence (AvsFD) [Bon+16]. An argument without
any attack branch is ranked higher than an argument only attacked by
one non-attacked argument.
Γ is acyclic, |BR⁻(a1)| = 0, |a2⁻| = 1, and |{a2⁻}⁻| = 0 ⇒ a1 ≻σ a2.
CP incompatible with QP [AB13]
CP incompatible with AvsFD [Bon+16]
CP incompatible with +DB [Bon+16]
VP incompatible with ⊕DB [Bon+16]
Table 3.1: Incompatible properties
SCT implies VP [AB13]
CT implies DP [AB13]
SCT implies CT [Bon+16]
CT implies NaE [Bon+16]
⊕DB implies +DB [Bon+16]
Table 3.2: Dependencies among properties
Property Yes/No Comment
Abs Yes
In Yes
VP Yes Implied by SCT
DP Yes Implied by CT
CT Yes Implied by SCT
SCT Yes
CP No
QP No
DDP No
SC No
⊕DB No Incompatible with VP
+AB Yes
+DB No
↑AB Yes
↑DB Yes
Tot Yes
NaE Yes Implied by CT
AvsFD No
Table 3.3: Properties satisfied by Cat [BH01]
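Table 3.3 concerns the categoriser of [BH01], introduced in Section 3.1, where Cat(a) = 1 for unattacked arguments and Cat(a) = 1/(1 + Σ_{b∈a⁻} Cat(b)) otherwise. A fixed-point iteration sketch; representation and names are illustrative assumptions.

```python
# Fixed-point computation of the categoriser: an unattacked argument
# gets value 1 (the sum over its attackers is empty), otherwise
# 1 / (1 + sum of the attackers' current values).

def categoriser(args, attacks, iters=200):
    attackers = {a: [x for (x, y) in attacks if y == a] for a in args}
    cat = {a: 1.0 for a in args}
    for _ in range(iters):  # iterate towards the fixed point
        cat = {a: 1.0 / (1.0 + sum(cat[b] for b in attackers[a]))
               for a in args}
    return cat

# a -> b -> c: a unattacked, b attacked by a, c attacked by b
cat = categoriser({'a', 'b', 'c'}, {('a', 'b'), ('b', 'c')})
print(round(cat['a'], 3), round(cat['b'], 3), round(cat['c'], 3))
# 1.0 0.5 0.667
```

Note that the unattacked argument ranks highest and the defended argument c ranks above its attacker b, in line with VP and the other properties marked Yes in Table 3.3.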
4 Argumentation Schemes
Argumentation schemes [WRM08] are reasoning patterns which generate
arguments:
• deductive/inductive inferences that represent forms of common types
of arguments used in everyday discourse, and in special contexts
(e.g. legal argumentation);

• neither deductive nor inductive, but defeasible, presumptive, or abductive.

Moreover, an argument satisfying a pattern may not be very strong by
itself, but may be strong enough to provide evidence to warrant rational
acceptance of its conclusion, given that its premises are acceptable.
According to Toulmin [Tou58] such an argument can be plausible and
thus accepted after a balance of considerations in an investigation or
discussion moved forward as new evidence is being collected. The
investigation can then move ahead, even under conditions of uncertainty
and lack of knowledge, using the conclusions tentatively accepted.
4.1 An example: Walton et al.’s Argumentation
Schemes for Practical Reasoning
Suppose I am deliberating with my spouse on what to do
with our pension investment fund — whether to buy stocks,
bonds or some other type of investments. We consult with a
financial adviser, an expert source of information who can
tell us what is happening in the stock market, and so forth at
the present time [Wal97].
Premises for practical inference:

1. one states that an agent (“I” or “my”) has a particular goal;

2. the other states that carrying out a particular action is the means for the agent to realise that goal.
〈S0,S1,...,Sn〉 represents a sequence of states of affairs that can be
ordered temporally from earlier to later. A state of affairs is meant to be
like a statement, but one describing some event or occurrence that can
be brought about by an agent. It may be a human action, or it may be a
natural event.
Argumentation Schemes • AS and Dialogues
Practical Inference

Premises:
Goal Premise: Bringing about Sn is my goal.
Means Premise: In order to bring about Sn, I need to bring about Si.

Conclusion:
Therefore, I need to bring about Si.

Critical questions:
Other-Means Question: Are there alternative possible actions to bring about Si that could also lead to the goal?
Best-Means Question: Is Si the best (or most favourable) of the alternatives?
Other-Goals Question: Do I have goals other than Si whose achievement is preferable and that should have priority?
Possibility Question: Is it possible to bring about Si in the given circumstances?
Side Effects Question: Would bringing about Si have known bad consequences that ought to be taken into account?
4.2 AS and Dialogues
Dialogue for practical reasoning: all moves (propose, prefer, justify) are
coordinated in a formal deliberation dialogue that has eight stages [HMP01].

1. Opening of the deliberation dialogue, and the raising of a governing
question about what is to be done.
2. Discussion of: (a) the governing question; (b) desirable goals; (c)
any constraints on the possible actions which may be considered;
(d) perspectives by which proposals may be evaluated; and (e) any
premises (facts) relevant to this evaluation.
3. Suggesting of possible action-options appropriate to the governing
question.
4. Commenting on proposals from various perspectives.
5. Revising of: (a) the governing question, (b) goals, (c) constraints, (d)
perspectives, and/or (e) action-options in the light of the comments
presented; and the undertaking of any information-gathering or
fact-checking required for resolution.
6. Recommending an option for action, and acceptance or non-acceptance
of this recommendation by each participant.
7. Confirming acceptance of a recommended option by each partici-
pant.
8. Closing of the deliberation dialogue.
Proposals are initially made at stage 3, and then evaluated at stages
4, 5 and 6.
Especially at stage 5, much argumentation taking the form of practi-
cal reasoning would seem to be involved.
As discussed in [Wal06], there are three dialectical adequacy conditions
for defining the speech act of making a proposal.
The Proponent’s Requirement (Condition 1). The proponent
puts forward a statement that describes an action and says that
both proponent and respondent (or the respondent group) should
carry out this action.
The proponent is committed to carrying out that action: the statement
has the logical form of the conclusion of a practical inference,
and also expresses an attitude toward that statement.
The Respondent’s Requirement (Condition 2). The statement
is put forward with the aim of offering reasons of a kind that will
lead the respondent to become committed to it.
The Governing Question Requirement (Condition 3). The job
of the proponent is to overcome doubts or conflicts of opinions, while
the job of the respondent is to express them. Thus the role of the
respondent is to ask questions that cast the prudential reasonableness
of the action in the statement into doubt, and to mount attacks
(counter-arguments and rebuttals) against it.
Condition 3 relates to the global structure of the dialogue, whereas
conditions 1 and 2 are more localised to the part where the proposal was
made. Condition 3 relates to the global burden of proof [Wal14] and the
roles of the two parties in the dialogue as a whole.
Speech acts [MP02], like making a proposal, are seen as types of
moves in a dialogue that are governed by rules. Three basic characteristics
of any type of move have to be defined:
1. pre-conditions of the move;
2. the conditions defining the move itself;
3. the post-conditions that state the result of the move.
Preconditions
• At least two agents (proponent and opponent);
• A governing question;
• Set of statements (propositions);
• The proponent proposes the proposition to the respondent if and
only if:
1. there is a set of premises that the proponent is committed to,
and that fit the premises of the argumentation scheme for practical
reasoning;
2. the proponent is advocating these premises, that is, he is mak-
ing a claim that they are true or applicable in the case at issue;
3. there is an inference from these premises fitting the argumentation
scheme for practical reasoning; and
4. the proposition is the conclusion of the inference.
The Defining Conditions
The central defining condition sets out the conditions defining the
structure of the move of making a proposal.
The Goal Statement: We have a goal G.
The Means Statement: Bringing about p is necessary (or sufficient)
for us to bring about G.
Then the inference follows.
The Proposal Statement: We should (practically ought to) bring
about p.
Proposal Statement in form of AS

Premises:
Goal Statement: We have a goal G.
Means Statement: Bringing about p is necessary (or sufficient) for us to bring about G.

Conclusion:
We should (practically ought to) bring about p.
The Post-Conditions
The central post-condition is the response condition.
The proposal must be open to critical questioning by the opponent. The
proponent should be open to answering doubts and objections
corresponding to any one of the five critical questions for practical
reasoning, as well as to counter-proposals, and is in charge of giving
reasons why her proposal is better than the alternatives.
The response condition set by these critical questions helps to explain
how and why the maker of a proposal needs to be open to questioning and
to requests for justification.
5 A Semantic-Web View of
Argumentation
Acknowledgement
This handout includes material from a number of collaborators, including
Chris Reed. An overview can also be found in [Bex+13].
5.1 The Argument Interchange Format [Rah+11]
[Figure: the original AIF ontology. A Graph (argument network) has-a Node and has-a Edge. A Node is-a Information Node (I-Node) or Scheme Node (S-Node); S-Nodes specialise into rule of inference application nodes (RA-Nodes), conflict application nodes (CA-Nodes), preference application nodes (PA-Nodes), and derived concept application nodes (e.g. defeat). RA-, CA- and PA-Nodes use, respectively, rule of inference schemes (logical or presumptive inference schemes), conflict schemes (e.g. logical conflict schemes), and preference schemes (logical or presumptive preference schemes), all contained in a Context.]
Figure 5.1: Original AIF Ontology [Che+06; Rah+11]
5.2 An Ontology of Arguments [Rah+11]
Please download Protégé from http://protege.stanford.edu/ and the
AIF OWL version from http://www.arg.dundee.ac.uk/wp-content/
uploads/AIF.owl
Representation of the argument described in Figure 5.2
___jobArg : PracticalReasoning_Inference
fulfils(___jobArg, PracticalReasoning_Scheme)
hasGoalPlan_Premise(___jobArg, ___jobArgGoalPlan)
hasConclusion(___jobArg, ___jobArgConclusion)
hasGoal_Premise(___jobArg, ___jobArgGoal)
___jobArgConclusion : EncouragedAction_Statement
fulfils(___jobArgConclusion, EncouragedAction_Desc)
Semantic Web Argumentation • AIF-OWL
[Figure: the Practical Inference scheme, with premise descriptions "Bringing about Sn is my goal" and "In order to bring about Sn I need to bring about Si" and conclusion description "Therefore I need to bring about Si". The instance argument fulfils the scheme: the premises "Bringing about being rich is my goal" and "In order to bring about being rich I need to bring about having a job" support the conclusion "Therefore I need to bring about having a job".]
Figure 5.2: An argument network linking instances of argument and
scheme components
[Figure: two conflict patterns among arguments built from rules such as p → q and r → p via modus ponens applications (MP1, MP2): a symmetric attack, mediated by a negation conflict node (neg1) between the two arguments' conclusions, and an undercut attack, mediated by a conflict node (cut1) targeting an inference application.]
Figure 5.3: Examples of conflicts [Rah+11, Fig. 2]
claimText (___jobArgConclusion "Therefore I need to bring about having a job")
___jobArgGoal : Goal_Statement
fulfils(___jobArgGoal, Goal_Desc)
claimText (___jobArgGoal "Bringing about being rich is my goal")
___jobArgGoalPlan : GoalPlan_Statement
fulfils(___jobArgGoalPlan, GoalPlan_Desc)
claimText (___jobArgGoalPlan "In order to bring about being rich I need to bring about having a job")
Relevant portion of the AIF ontology

EncouragedAction_Statement
EncouragedAction_Statement ⊑ Statement

GoalPlan_Statement
GoalPlan_Statement ⊑ Statement

Goal_Statement
Goal_Statement ⊑ Statement

I-node
I-node ≡ Statement
I-node ⊑ Node
I-node ⊑ ¬S-node

Inference
Inference ≡ RA-node
Inference ⊑ ∃ fulfils Inference_Scheme
Inference ⊑ ≥1 hasPremise Statement
Inference ⊑ Scheme_Application
Inference ⊑ =1 hasConclusion (Scheme_Application ⊔ Statement)

Inference_Scheme
Inference_Scheme ⊑ Scheme ⊓ ≥1 hasPremise_Desc Statement_Description ⊓ =1 hasConclusion_Desc (Scheme ⊔ Statement_Description)

PracticalReasoning_Inference
PracticalReasoning_Inference ≡ Presumptive_Inference ⊓ ∃ hasConclusion EncouragedAction_Statement ⊓ ∃ hasGoalPlan_Premise GoalPlan_Statement ⊓ ∃ hasGoal_Premise Goal_Statement

RA-node
RA-node ≡ Inference
RA-node ⊑ S-node

S-node
S-node ≡ Scheme_Application
S-node ⊑ Node
S-node ⊑ ¬I-node
Scheme
Scheme ⊑ Form
Scheme ⊑ ¬Statement_Description

Scheme_Application
Scheme_Application ≡ S-node
Scheme_Application ⊑ ∃ fulfils Scheme
Scheme_Application ⊑ Thing
Scheme_Application ⊑ ¬Statement

Statement
Statement ≡ NegStatement
Statement ≡ I-node
Statement ⊑ Thing
Statement ⊑ ∃ fulfils Statement_Description
Statement ⊑ ¬Scheme_Application

Statement_Description
Statement_Description ⊑ Form
Statement_Description ⊑ ¬Scheme

fulfils
∃ fulfils Thing ⊑ Node

hasConclusion_Desc
∃ hasConclusion_Desc Thing ⊑ Inference_Scheme

hasGoalPlan_Premise
hasGoalPlan_Premise ⊑ hasPremise

hasGoal_Premise
hasGoal_Premise ⊑ hasPremise

claimText
∃ claimText DatatypeLiteral ⊑ Statement
∀ claimText DatatypeString
Individuals of EncouragedAction_Desc
EncouragedAction_Desc : Statement_Description
formDescription (EncouragedAction_Desc "A should be brought about")
Individuals of GoalPlan_Desc
GoalPlan_Desc : Statement_Description
formDescription (GoalPlan_Desc "Bringing about B is the way to bring
about A")
Individuals of Goal_Desc
Goal_Desc : Statement_Description
formDescription (Goal_Desc "The goal is to bring about A")
Individuals of PracticalReasoning_Scheme
PracticalReasoning_Scheme : PresumptiveInference_Scheme
hasPremise_Desc(PracticalReasoning_Scheme, Goal_Desc)
hasConclusion_Desc(PracticalReasoning_Scheme, EncouragedAction_Desc)
hasPremise_Desc(PracticalReasoning_Scheme, GoalPlan_Desc)
6 A novel synthesis: Collaborative
Intelligence Spaces (CISpaces)
Acknowledgement
This handout includes material from a number of collaborators, including
Alice Toniolo and Timothy J. Norman. Main reference: [Ton+15].
6.1 Introduction
Problem
• Intelligence analysis is critical for making well-informed decisions
• Complexities in current military operations increase the amount of
information available to intelligence analysts
CISpaces (Collaborative Intelligence Spaces)
• A toolkit developed to support collaborative intelligence analysis
• CISpaces aims to improve situational understanding of evolving sit-
uations
6.2 Intelligence Analysis
Definition 50 ([DCD11]). The directed and coordinated acquisition and
analysis of information to assess capabilities, intent and opportunities for
exploitation by leaders at all levels. ♠
Fig. 6.1 summarises the Pirolli and Card Model [PC05].
Table 6.1 illustrates the problems of individual analysis and how
collaborative analysis can improve it.
CISpaces • Intelligence Analysis
[Figure: the model arranges analysis tasks along effort and structure axes. The foraging loop moves from external data sources through search and filter (into the shoebox) and search for evidence (into the evidence file); the sense-making loop moves through schematize, build case (hypotheses), and tell story towards presentation, with search for support, search for information, and reevaluate as feedback steps.]
Figure 6.1: The Pirolli & Card Model [PC05]
Individual analysis:
• Scattered information & noise
• Hard to make connections
• Missing information
• Cognitive biases
• Missing expertise

Collaborative analysis:
• More effective and reliable
• Brings together different expertise, resources
• Prevents biases
Table 6.1: Individual vs. Collaborative Analysis
[Figure: a map of Kishshire, the rural area of Kish (Kish river, harbour, Kish Farm, water pipe, aqueduct, Kish Hall Hotel), annotated with two initial items of information: "Illness among young and elderly people in Kishshire caused by bacteria" and "Unidentified illness is affecting the local livestock in Kishshire, the rural area of Kish".]
Figure 6.2: Initial information assigned to Joe
[Figure: a timeline of further events: illness among people and livestock; tests on people/livestock show a bacteria in the water supply; an answer to a POI request for suspicious people reports "GER-MAN" seen in Kish; an explosion at the Kish Hall Hotel.]
Figure 6.3: Further events happening in Kish
Example of Intelligence Analysis Process
Goal: discover potential threats in Kish
Analysts: Joe, Miles and Ella
What Joe knows is summarised by Figs. 6.2 and 6.3
Main critical points and possible conclusions during the analysis:
• Causes of water contamination → waterborne/non-waterborne
bacteria;
• POI responsible for water contamination;
• Causes of hotel explosion.
CISpaces • Reasoning with Evidence
6.3 Reasoning with Evidence
• Identify what to believe happened from the claims constructed upon
information (the sensemaking process);
• Derive conclusions from data aggregated from explicitly requested
information (the crowdsourcing process);
• Assess what is credible according to the history of data manipula-
tion (the provenance reasoning process).
6.4 Arguments for Sensemaking
Formal Linkage for Semantics Computation
A CISpaces graph, WAT, can be transformed into a corresponding ASPIC-based
argumentation theory. An edge in CISpaces is represented textually
as →, an info/claim node is written pi and a link node is referred to
as type, where type ∈ {Pro,Con}. Then, [p1,..., pn → Pro → pφ] indicates
that the Pro-link has p1,..., pn as incoming nodes and an outgoing node
pφ.
Definition 51. A WAT is a tuple 〈K, AS〉 such that AS = 〈L, ¯, R〉 is
constructed as follows:

• L is a propositional logic language, and a node corresponds to a
proposition p ∈ L. The WAT set of propositions is Lw.

• The set R is formed by rules ri ∈ R corresponding to Pro links
between nodes such that [p1,..., pn → Pro → pφ] is converted to
ri : p1,..., pn ⇒ pφ.

• The contrariness function between elements is defined as: i) if [p1 →
Con → p2] and [p2 → Con → p1], p1 and p2 are contradictory; ii) if
[p1 → Con → p2] and p1 is the only premise of the Con link, then p1
is a contrary of p2; iii) if [p1, p3 → Con → p2] then a rule is added
such that p1 and p3 form an argument with conclusion ph against
p2, ri : p1, p3 ⇒ ph, and ph is a contrary of p2. ♠
Definition 52. K is composed of propositions pi, K = {pj, pi,...}, such
that: i) if a set of rules r1,...,rn ∈ R forms a cycle, i.e. for every pi that
is the consequent of a rule r there exists a rule r′ containing pi as
antecedent, then pi ∈ K if pi is an info-node; ii) otherwise, pi ∈ K if pi
is not the consequent of any rule r ∈ R. ♠
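The Pro-link clause of Definition 51 can be sketched as a direct translation. The tuple representation of links and the textual rule format below are illustrative assumptions, not the CISpaces implementation.

```python
# Sketch of the Pro-link conversion: each [p1,...,pn -> Pro -> p_phi]
# becomes a defeasible rule ri : p1,...,pn => p_phi.

def pro_links_to_rules(pro_links):
    """pro_links: list of (premises, conclusion) pairs."""
    return [f"r{i}: {', '.join(prems)} => {concl}"
            for i, (prems, concl) in enumerate(pro_links, start=1)]

links = [(["p1", "p2"], "p3"),  # [p1,p2 -> Pro -> p3]
         (["p3"], "p4")]        # [p3 -> Pro -> p4]
for rule in pro_links_to_rules(links):
    print(rule)
# r1: p1, p2 => p3
# r2: p3 => p4
```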
CISpaces • Arguments for Sensemaking
An Example of Argumentation Schemes for Intelligence
Analysis
Intelligence analysis broadly consists of three components: Activities
(Act), including actions performed by actors and events happening in the
world; Entities (Et), including actors as individuals or groups, and objects
such as resources; and Facts (Ft), including statements about the state of
the world regarding entities and activities.

A hypothesis in intelligence analysis is composed of activities and events
that show how the situation has evolved. The argument from cause to
effect (ArgCE) forms the basis of these hypotheses. The scheme, adapted
from [WRM08], is:
Argument from cause to effect

Premises:
• Typically, if C (either a fact Fti or an activity Acti) occurs, then E (either a fact Fti or an activity Acti) will occur
• In this case, C occurs

Conclusion:
In this case E will occur

Critical questions:
CQCE1: Is there evidence for C to occur?
CQCE2: Is there a general rule for C causing E?
CQCE3: Is the relationship between C and E causal?
CQCE4: Are there any exceptions to the causal rule that prevent the effect E from occurring?
CQCE5: Has C happened before E?
CQCE6: Is there any other C that caused E?
Formally:
rCE : rule(R,C,E), occur(C), before(C,E), ruletype(R,causal), noexceptions(R) ⇒ occur(E)
CISpaces • Arguments for Provenance
[Figure: the PROV core data model: Entities, Activities and Actors (agents) linked by the relations WasGeneratedBy, Used, WasInformedBy, WasDerivedFrom, WasAttributedTo, WasAssociatedWith and ActedOnBehalfOf.]
Figure 6.4: PROV Data Model [MM13]
[Figure: a PROV graph for the claim pjID "Bacteria contaminates local water". A water monitoring requirement informs the monitoring of the water supply, which produces a water sample; a lab water testing activity, associated with an NGO lab assistant acting on behalf of the NGO Chemical Lab, uses the sample and generates a water contamination report (the primary source), from which the claim is derived via a report generation activity. Timestamps order the p-elements (p-agents, p-entities, p-activities) from older to newer, and a pattern Pg with variables ?a1, ?a2, ?p, ?ag marks the goal-relevant portion.]
Figure 6.5: Provenance of Joe’s information
6.5 Arguments for Provenance
Provenance can be used to annotate how, where, when and by whom some
information was produced [MM13]. Figure 6.4 depicts the core model for
representing provenance, and Figure 6.5 shows an example of provenance
for the pieces of information for analyst Joe w.r.t. the water contamination
problem in Kish.
Patterns representing relevant provenance information that may war-
rant the credibility of a datum can be integrated into the analysis by ap-
plying the argument scheme for provenance (ArgPV) [Ton+14]:
Argument Scheme for Provenance

Premises:
• Given pj about activity Acti, entity Eti, or fact Fti (ppv1)
• GP(pj) includes pattern Pm of p-entities Apv, p-activities Ppv, p-agents Agpv involved in producing pj (ppv2)
• GP(pj) infers that information pj is true (ppv3)

Conclusion:
Acti/Eti/Fti in pj may plausibly be true (ppvcn)

Critical questions:
CQPV1: Is pj consistent with other information?
CQPV2: Is pj supported by evidence?
CQPV3: Does GP(pj) contain p-elements that lead us not to believe pj?
CQPV4: Is there any other p-element that should have been included in GP(pj) to infer that pj is credible?
7 Natural Language Interfaces
7.1 Experiments with Humans: Scenarios [CTO14]
Scenario 1.B
The weather forecasting service of the broadcasting com-
pany AAA says that it will rain tomorrow. Meanwhile, the5
forecast service of the broadcasting company BBB says that
it will be cloudy tomorrow but that it will not rain. It is also
well known that the forecasting service of BBB is more accu-
rate than the one of AAA.
Γ1.B = 〈S1.B,D1.B〉, where:

S1.B:
s1 : ⇒ sAAA
s2 : ⇒ sBBB

D1.B:
r1 : sAAA ∧ ∼exAAA ⇒ rain
r2 : sBBB ∧ ∼exBBB ⇒ ¬rain
r3 : ∼exaccuracy ⇒ r1 ≺ r2

Γ1.B gives rise to the following set of arguments: A1.B = {a1 = 〈s1,r1〉, a2 =
〈s2,r2〉, a3 = 〈r3〉}, where a2 A1.B-defeats a1. Therefore the set of justified
arguments (which is also the unique stable extension) is {a2,a3}.
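The claim about Γ1.B can be checked by brute force: a stable extension is a conflict-free set that defeats every argument outside it. A small Python sketch (the representation is illustrative; Chapter 2 discusses real solvers):

```python
from itertools import combinations

# Brute-force enumeration of stable extensions, applied to the defeat
# graph of Scenario 1.B: a2 defeats a1, a3 is undefeated.

def stable_extensions(args, defeats):
    exts = []
    for r in range(len(args) + 1):
        for cand in combinations(sorted(args), r):
            s = set(cand)
            conflict_free = not any((x, y) in defeats for x in s for y in s)
            attacks_rest = all(any((x, y) in defeats for x in s)
                               for y in args - s)
            if conflict_free and attacks_rest:
                exts.append(s)
    return exts

print([sorted(e) for e in stable_extensions({'a1', 'a2', 'a3'},
                                            {('a2', 'a1')})])
# [['a2', 'a3']]
```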
Scenario 1.E
The weather forecasting service of the broadcasting com-
pany AAA says that it will rain tomorrow. Meanwhile, the
forecast service of the broadcasting company BBB says that
it will be cloudy tomorrow but that it will not rain. It is also
well known that the forecasting service of BBB is more accurate
than the one of AAA. However, yesterday the trustworthy
newspaper CCC published an article which said that BBB has
cut the resources for its weather forecasting service in the past
months, thus making it less reliable than in the past.
Γ1.E = 〈S1.E,D1.E〉, where S1.E = S1.B ∪ {s3 : ⇒ sCCC}, and D1.E = D1.B ∪
{r4 : sCCC ∧ ∼exCCC ⇒ cut, r5 : cut ∧ ∼excut ⇒ exaccuracy}.
Γ1.E gives rise to the following set of arguments A1.E = A1.B ∪ {a4 =
〈s3,r4,r5〉}. a4 is the unique justified argument, while the defensible ex-
tensions (which are also stable) are {a1,a4}, {a2,a4}.
Handout: Argumentation in Artificial Intelligence: From Theory to Practice
Federico Cerutti706 vistas
Argumentation in Artificial Intelligence: 20 years after Dung's work. Right m... por Federico Cerutti
Argumentation in Artificial Intelligence: 20 years after Dung's work. Right m...Argumentation in Artificial Intelligence: 20 years after Dung's work. Right m...
Argumentation in Artificial Intelligence: 20 years after Dung's work. Right m...
Federico Cerutti731 vistas
Argumentation in Artificial Intelligence: 20 years after Dung's work. Left ma... por Federico Cerutti
Argumentation in Artificial Intelligence: 20 years after Dung's work. Left ma...Argumentation in Artificial Intelligence: 20 years after Dung's work. Left ma...
Argumentation in Artificial Intelligence: 20 years after Dung's work. Left ma...
Federico Cerutti422 vistas
Argumentation in Artificial Intelligence por Federico Cerutti
Argumentation in Artificial IntelligenceArgumentation in Artificial Intelligence
Argumentation in Artificial Intelligence
Federico Cerutti2.8K vistas
Constructing and Evaluating Bipolar Weighted Argumentation Frameworks for Onl... por Andrea Pazienza
Constructing and Evaluating Bipolar Weighted Argumentation Frameworks for Onl...Constructing and Evaluating Bipolar Weighted Argumentation Frameworks for Onl...
Constructing and Evaluating Bipolar Weighted Argumentation Frameworks for Onl...
Andrea Pazienza473 vistas
Partial ordering in soft set context por Alexander Decker
Partial ordering in soft set contextPartial ordering in soft set context
Partial ordering in soft set context
Alexander Decker321 vistas
A Matrix Based Approach for Weighted Argumentation Frameworks por Carlo Taticchi
A Matrix Based Approach for Weighted Argumentation FrameworksA Matrix Based Approach for Weighted Argumentation Frameworks
A Matrix Based Approach for Weighted Argumentation Frameworks
Carlo Taticchi140 vistas
Cs6503 theory of computation november december 2015 be cse anna university q... por appasami
Cs6503 theory of computation november december 2015  be cse anna university q...Cs6503 theory of computation november december 2015  be cse anna university q...
Cs6503 theory of computation november december 2015 be cse anna university q...
appasami5.4K vistas
Introduction to machine learning por butest
Introduction to machine learningIntroduction to machine learning
Introduction to machine learning
butest735 vistas
Introduction to Real Analysis 4th Edition Bartle Solutions Manual por DawsonVeronica
Introduction to Real Analysis 4th Edition Bartle Solutions ManualIntroduction to Real Analysis 4th Edition Bartle Solutions Manual
Introduction to Real Analysis 4th Edition Bartle Solutions Manual
DawsonVeronica30K vistas

Más de Federico Cerutti

Security of Artificial Intelligence por
Security of Artificial IntelligenceSecurity of Artificial Intelligence
Security of Artificial IntelligenceFederico Cerutti
70 vistas170 diapositivas
Introduction to Evidential Neural Networks por
Introduction to Evidential Neural NetworksIntroduction to Evidential Neural Networks
Introduction to Evidential Neural NetworksFederico Cerutti
241 vistas38 diapositivas
Argumentation and Machine Learning: When the Whole is Greater than the Sum of... por
Argumentation and Machine Learning: When the Whole is Greater than the Sum of...Argumentation and Machine Learning: When the Whole is Greater than the Sum of...
Argumentation and Machine Learning: When the Whole is Greater than the Sum of...Federico Cerutti
1.4K vistas217 diapositivas
Human-Argumentation Experiment Pilot 2013: Technical Material por
Human-Argumentation Experiment Pilot 2013: Technical MaterialHuman-Argumentation Experiment Pilot 2013: Technical Material
Human-Argumentation Experiment Pilot 2013: Technical MaterialFederico Cerutti
112 vistas6 diapositivas
Probabilistic Logic Programming with Beta-Distributed Random Variables por
Probabilistic Logic Programming with Beta-Distributed Random VariablesProbabilistic Logic Programming with Beta-Distributed Random Variables
Probabilistic Logic Programming with Beta-Distributed Random VariablesFederico Cerutti
114 vistas27 diapositivas
Supporting Scientific Enquiry with Uncertain Sources por
Supporting Scientific Enquiry with Uncertain SourcesSupporting Scientific Enquiry with Uncertain Sources
Supporting Scientific Enquiry with Uncertain SourcesFederico Cerutti
200 vistas40 diapositivas

Más de Federico Cerutti(20)

Introduction to Evidential Neural Networks por Federico Cerutti
Introduction to Evidential Neural NetworksIntroduction to Evidential Neural Networks
Introduction to Evidential Neural Networks
Federico Cerutti241 vistas
Argumentation and Machine Learning: When the Whole is Greater than the Sum of... por Federico Cerutti
Argumentation and Machine Learning: When the Whole is Greater than the Sum of...Argumentation and Machine Learning: When the Whole is Greater than the Sum of...
Argumentation and Machine Learning: When the Whole is Greater than the Sum of...
Federico Cerutti1.4K vistas
Human-Argumentation Experiment Pilot 2013: Technical Material por Federico Cerutti
Human-Argumentation Experiment Pilot 2013: Technical MaterialHuman-Argumentation Experiment Pilot 2013: Technical Material
Human-Argumentation Experiment Pilot 2013: Technical Material
Federico Cerutti112 vistas
Probabilistic Logic Programming with Beta-Distributed Random Variables por Federico Cerutti
Probabilistic Logic Programming with Beta-Distributed Random VariablesProbabilistic Logic Programming with Beta-Distributed Random Variables
Probabilistic Logic Programming with Beta-Distributed Random Variables
Federico Cerutti114 vistas
Supporting Scientific Enquiry with Uncertain Sources por Federico Cerutti
Supporting Scientific Enquiry with Uncertain SourcesSupporting Scientific Enquiry with Uncertain Sources
Supporting Scientific Enquiry with Uncertain Sources
Federico Cerutti200 vistas
Introduction to Formal Argumentation Theory por Federico Cerutti
Introduction to Formal Argumentation TheoryIntroduction to Formal Argumentation Theory
Introduction to Formal Argumentation Theory
Federico Cerutti803 vistas
Argumentation in Artificial Intelligence: From Theory to Practice por Federico Cerutti
Argumentation in Artificial Intelligence: From Theory to PracticeArgumentation in Artificial Intelligence: From Theory to Practice
Argumentation in Artificial Intelligence: From Theory to Practice
Federico Cerutti1.6K vistas
Algorithm Selection for Preferred Extensions Enumeration por Federico Cerutti
Algorithm Selection for Preferred Extensions EnumerationAlgorithm Selection for Preferred Extensions Enumeration
Algorithm Selection for Preferred Extensions Enumeration
Federico Cerutti594 vistas
Formal Arguments, Preferences, and Natural Language Interfaces to Humans: an ... por Federico Cerutti
Formal Arguments, Preferences, and Natural Language Interfaces to Humans: an ...Formal Arguments, Preferences, and Natural Language Interfaces to Humans: an ...
Formal Arguments, Preferences, and Natural Language Interfaces to Humans: an ...
Federico Cerutti764 vistas
Argumentation Extensions Enumeration as a Constraint Satisfaction Problem: a ... por Federico Cerutti
Argumentation Extensions Enumeration as a Constraint Satisfaction Problem: a ...Argumentation Extensions Enumeration as a Constraint Satisfaction Problem: a ...
Argumentation Extensions Enumeration as a Constraint Satisfaction Problem: a ...
Federico Cerutti758 vistas
A SCC Recursive Meta-Algorithm for Computing Preferred Labellings in Abstract... por Federico Cerutti
A SCC Recursive Meta-Algorithm for Computing Preferred Labellings in Abstract...A SCC Recursive Meta-Algorithm for Computing Preferred Labellings in Abstract...
A SCC Recursive Meta-Algorithm for Computing Preferred Labellings in Abstract...
Federico Cerutti676 vistas
Cerutti-AT2013-Graphical Subjective Logic por Federico Cerutti
Cerutti-AT2013-Graphical Subjective LogicCerutti-AT2013-Graphical Subjective Logic
Cerutti-AT2013-Graphical Subjective Logic
Federico Cerutti795 vistas
Cerutti--Introduction to Argumentation (seminar @ University of Aberdeen) por Federico Cerutti
Cerutti--Introduction to Argumentation (seminar @ University of Aberdeen)Cerutti--Introduction to Argumentation (seminar @ University of Aberdeen)
Cerutti--Introduction to Argumentation (seminar @ University of Aberdeen)
Federico Cerutti1K vistas
Cerutti--Knowledge Representation and Reasoning (postgrad seminar @ Universit... por Federico Cerutti
Cerutti--Knowledge Representation and Reasoning (postgrad seminar @ Universit...Cerutti--Knowledge Representation and Reasoning (postgrad seminar @ Universit...
Cerutti--Knowledge Representation and Reasoning (postgrad seminar @ Universit...
Federico Cerutti2K vistas
Cerutti--Verification of Crypto Protocols (postgrad seminar @ University of B... por Federico Cerutti
Cerutti--Verification of Crypto Protocols (postgrad seminar @ University of B...Cerutti--Verification of Crypto Protocols (postgrad seminar @ University of B...
Cerutti--Verification of Crypto Protocols (postgrad seminar @ University of B...
Federico Cerutti1.1K vistas

Último

Community-led Open Access Publishing webinar.pptx por
Community-led Open Access Publishing webinar.pptxCommunity-led Open Access Publishing webinar.pptx
Community-led Open Access Publishing webinar.pptxJisc
91 vistas9 diapositivas
Class 10 English notes 23-24.pptx por
Class 10 English notes 23-24.pptxClass 10 English notes 23-24.pptx
Class 10 English notes 23-24.pptxTARIQ KHAN
125 vistas53 diapositivas
Narration ppt.pptx por
Narration  ppt.pptxNarration  ppt.pptx
Narration ppt.pptxTARIQ KHAN
131 vistas24 diapositivas
PLASMA PROTEIN (2).pptx por
PLASMA PROTEIN (2).pptxPLASMA PROTEIN (2).pptx
PLASMA PROTEIN (2).pptxMEGHANA C
66 vistas34 diapositivas
Sociology KS5 por
Sociology KS5Sociology KS5
Sociology KS5WestHatch
65 vistas23 diapositivas
JiscOAWeek_LAIR_slides_October2023.pptx por
JiscOAWeek_LAIR_slides_October2023.pptxJiscOAWeek_LAIR_slides_October2023.pptx
JiscOAWeek_LAIR_slides_October2023.pptxJisc
93 vistas8 diapositivas

Último(20)

Community-led Open Access Publishing webinar.pptx por Jisc
Community-led Open Access Publishing webinar.pptxCommunity-led Open Access Publishing webinar.pptx
Community-led Open Access Publishing webinar.pptx
Jisc91 vistas
Class 10 English notes 23-24.pptx por TARIQ KHAN
Class 10 English notes 23-24.pptxClass 10 English notes 23-24.pptx
Class 10 English notes 23-24.pptx
TARIQ KHAN125 vistas
Narration ppt.pptx por TARIQ KHAN
Narration  ppt.pptxNarration  ppt.pptx
Narration ppt.pptx
TARIQ KHAN131 vistas
PLASMA PROTEIN (2).pptx por MEGHANA C
PLASMA PROTEIN (2).pptxPLASMA PROTEIN (2).pptx
PLASMA PROTEIN (2).pptx
MEGHANA C66 vistas
Sociology KS5 por WestHatch
Sociology KS5Sociology KS5
Sociology KS5
WestHatch65 vistas
JiscOAWeek_LAIR_slides_October2023.pptx por Jisc
JiscOAWeek_LAIR_slides_October2023.pptxJiscOAWeek_LAIR_slides_October2023.pptx
JiscOAWeek_LAIR_slides_October2023.pptx
Jisc93 vistas
ISO/IEC 27001 and ISO/IEC 27005: Managing AI Risks Effectively por PECB
ISO/IEC 27001 and ISO/IEC 27005: Managing AI Risks EffectivelyISO/IEC 27001 and ISO/IEC 27005: Managing AI Risks Effectively
ISO/IEC 27001 and ISO/IEC 27005: Managing AI Risks Effectively
PECB 574 vistas
Narration lesson plan.docx por TARIQ KHAN
Narration lesson plan.docxNarration lesson plan.docx
Narration lesson plan.docx
TARIQ KHAN108 vistas
REPRESENTATION - GAUNTLET.pptx por iammrhaywood
REPRESENTATION - GAUNTLET.pptxREPRESENTATION - GAUNTLET.pptx
REPRESENTATION - GAUNTLET.pptx
iammrhaywood91 vistas
Education and Diversity.pptx por DrHafizKosar
Education and Diversity.pptxEducation and Diversity.pptx
Education and Diversity.pptx
DrHafizKosar135 vistas
Class 10 English lesson plans por TARIQ KHAN
Class 10 English  lesson plansClass 10 English  lesson plans
Class 10 English lesson plans
TARIQ KHAN280 vistas
AUDIENCE - BANDURA.pptx por iammrhaywood
AUDIENCE - BANDURA.pptxAUDIENCE - BANDURA.pptx
AUDIENCE - BANDURA.pptx
iammrhaywood77 vistas

Handout for the course Abstract Argumentation and Interfaces to Argumentative Reasoning

  • 1. Abstract Argumentation and Interfaces to Argumentative Reasoning Handouts Federico Cerutti September 2016
  • 2. Contents
    1 Dung’s AF 3
      1.1 Principles for Extension-based Semantics: [BG07] 3
      1.2 Acceptability of Arguments [PV02; BG09a] 4
      1.3 (Some) Semantics [Dun95] 5
      1.4 Labelling-Based Semantics Representation [Cam06] 6
      1.5 Skepticism Relationships [BG09b] 9
      1.6 Signatures [Dun+14] 9
      1.7 Decomposability and Transparency [Bar+14] 12
      1.8 Extension-based I/O Characterisation [GLW16] 13
    2 Implementations 14
      2.1 Ad Hoc Procedures 14
      2.2 Constraint Satisfaction Programming 14
      2.3 Answer Set Programming 15
      2.4 Propositional Satisfiability Problems 15
      2.5 Second-order Solver [BJT16] 23
      2.6 Which One? 23
    3 Ranking-Based Semantics 28
      3.1 The Categoriser Semantics [BH01] 28
      3.2 Properties for Ranking-Based Semantics [Bon+16] 28
    4 Argumentation Schemes 33
      4.1 An example: Walton et al.’s Argumentation Schemes for Practical Reasoning 33
      4.2 AS and Dialogues 34
    5 Semantic Web Argumentation 38
      5.1 AIF 38
      5.2 AIF-OWL 38
    6 CISpaces 43
      6.1 Introduction 43
      6.2 Intelligence Analysis 43
      6.3 Reasoning with Evidence 46
      6.4 Arguments for Sensemaking 46
      6.5 Arguments for Provenance 48
    Cardiff University, 2016 Page 1
  • 3. 7 Natural Language Interfaces 50
      7.1 Experiments with Humans: Scenarios [CTO14] 50
      7.2 Lessons From Argument Mining [BR11] 55
    Bibliography 56
  • 4. 1 Dung’s Argumentation Framework
Acknowledgement This handout includes material from a number of collaborators, including Pietro Baroni, Massimiliano Giacomin, Thomas Linsbichler, and Stefan Woltran.
Definition 1 ([Dun95]). A Dung argumentation framework AF is a pair 〈A ,→〉 where A is a set of arguments, and → is a binary relation on A , i.e. → ⊆ A × A . ♠
An argumentation framework has an obvious representation as a directed graph where the nodes are arguments and the edges are drawn from attacking to attacked arguments.
The set of attackers of an argument a1 will be denoted as a1− ≜ {a2 : a2 → a1}; the set of arguments attacked by a1 will be denoted as a1+ ≜ {a2 : a1 → a2}. We also extend these notations to sets of arguments: given E ⊆ A , E− ≜ {a2 | ∃a1 ∈ E, a2 → a1} and E+ ≜ {a2 | ∃a1 ∈ E, a1 → a2}.
With a slight abuse of notation we define S → b ≡ ∃a ∈ S : a → b. Similarly, b → S ≡ ∃a ∈ S : b → a.
Given Γ = 〈A ,→〉 and Γ′ = 〈A ′,→′〉, Γ ∪ Γ′ = 〈A ∪ A ′, → ∪ →′〉.
1.1 Principles for Extension-based Semantics: [BG07]
Definition 2.
  • 5. Given an argumentation framework AF = 〈A ,→〉, a set S ⊆ A is D-conflict-free, denoted as D-cf(S), if and only if ∄a,b ∈ S such that a → b. A semantics σ satisfies the D-conflict-free principle if and only if ∀AF, ∀E ∈ Eσ(AF), E is D-conflict-free. ♠
Definition 3. Given an argumentation framework AF = 〈A ,→〉, an argument a ∈ A is D-acceptable w.r.t. a set S ⊆ A if and only if ∀b ∈ A , b → a ⇒ S → b. The function FAF : 2A → 2A which, given a set S ⊆ A , returns the set of the D-acceptable arguments w.r.t. S, is called the D-characteristic function of AF. ♠
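The basic notions above — attackers, D-conflict-freeness, and the D-characteristic function FAF — can be sketched in a few lines of Python. This is a minimal sketch; the function names and the example framework are mine, not from the handout.

```python
# Minimal sketch of a Dung AF: arguments as a set of labels,
# attacks as a set of (attacker, attacked) pairs.

def attackers(attacks, a):
    """a^- : the set of arguments attacking a."""
    return {x for (x, y) in attacks if y == a}

def is_conflict_free(attacks, S):
    """D-cf(S): no member of S attacks a member of S."""
    return not any(x in S and y in S for (x, y) in attacks)

def characteristic_function(args, attacks, S):
    """F_AF(S): the arguments D-acceptable w.r.t. S, i.e. those
    whose every attacker is attacked by some member of S."""
    return {a for a in args
            if all(any((s, b) in attacks for s in S)
                   for b in attackers(attacks, a))}

# Example: c attacks b, b attacks a
args = {"a", "b", "c"}
attacks = {("b", "a"), ("c", "b")}
print(characteristic_function(args, attacks, set()))   # the unattacked arguments
print(characteristic_function(args, attacks, {"c"}))   # c defends a against b
```

Note that F applied to the empty set returns exactly the unattacked arguments, since the inner `all` is vacuously true for them.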
  • 6. Dung’s AF • Acceptability of Arguments [PV02; BG09a]
Definition 4. Given an argumentation framework AF = 〈A ,→〉, a set S ⊆ A is D-admissible (S ∈ AS(AF)) if and only if D-cf(S) and ∀a ∈ S, a is D-acceptable w.r.t. S. The set of all the D-admissible sets of AF is denoted as AS(AF). ♠
Dσ = {AF | Eσ(AF) ≠ ∅}
Definition 5.
  • 7. A semantics σ satisfies the D-admissibility principle if and only if ∀AF ∈ Dσ, Eσ(AF) ⊆ AS(AF), namely ∀E ∈ Eσ(AF) it holds that: a ∈ E ⇒ (∀b ∈ A , b → a ⇒ E → b). ♠
Definition 6. Given an argumentation framework AF = 〈A ,→〉, a ∈ A and S ⊆ A , we say that a is D-strongly-defended by S (denoted as D-sd(a,S)) iff ∀b ∈ A , b → a, ∃c ∈ S \ {a} : c → b and D-sd(c, S \ {a}). ♠
Definition 7.
  • 8. A semantics σ satisfies the D-strong admissibility principle if and only if ∀AF ∈ Dσ, ∀E ∈ Eσ(AF) it holds that a ∈ E ⇒ D-sd(a,E) ♠
Definition 8.
  • 9. A semantics σ satisfies the D-reinstatement principle if and only if ∀AF ∈ Dσ, ∀E ∈ Eσ(AF) it holds that: ∀a ∈ A , (∀b ∈ A , b → a ⇒ E → b) ⇒ a ∈ E. ♠
Definition 9.
  • 10. A set of extensions E is D-I-maximal if and only if ∀E1,E2 ∈ E , if E1 ⊆ E2 then E1 = E2. A semantics σ satisfies the D-I-maximality principle if and only if ∀AF ∈ Dσ, Eσ(AF) is D-I-maximal. ♠
Definition 10. Given an argumentation framework AF = 〈A ,→〉, a non-empty set S ⊆ A is D-unattacked if and only if ∄a ∈ (A \ S) : a → S. The set of D-unattacked sets of AF is denoted as US(AF). ♠
Definition 11. Let AF = 〈A ,→〉 be an argumentation framework. The restriction of AF to S ⊆ A is the argumentation framework AF↓S = 〈S, → ∩ (S × S)〉. ♠
Definition 12.
  • 11. A semantics σ satisfies the D-directionality principle if and only if ∀AF = 〈A ,→〉, ∀S ∈ US(AF), AEσ(AF,S) = Eσ(AF↓S), where AEσ(AF,S) ≜ {(E ∩ S) | E ∈ Eσ(AF)} ⊆ 2S. ♠
1.2 Acceptability of Arguments [PV02; BG09a]
Definition 13. Given a semantics σ and an argumentation framework AF = 〈A ,→〉 with AF ∈ Dσ, an argument a ∈ A is:
• skeptically justified iff ∀E ∈ Eσ(AF), a ∈ E;
• credulously justified iff ∃E ∈ Eσ(AF), a ∈ E. ♠
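Definition 13 reduces directly to two quantifiers over the set of extensions, once those have been computed. A minimal sketch (function names are mine):

```python
# Skeptical vs credulous justification, given the extensions
# of the chosen semantics (already computed elsewhere).

def skeptically_justified(a, extensions):
    """a belongs to every extension."""
    return all(a in E for E in extensions)

def credulously_justified(a, extensions):
    """a belongs to at least one extension."""
    return any(a in E for E in extensions)

# The two preferred extensions of the mutual attack a <-> b:
extensions = [{"a"}, {"b"}]
print(skeptically_justified("a", extensions))  # False
print(credulously_justified("a", extensions))  # True
```

With the convention of Definition 14, argument a in the example is thus defensible: credulously but not skeptically justified.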
  • 12. Dung’s AF • (Some) Semantics [Dun95]
Definition 14. Given a semantics σ and an argumentation framework AF = 〈A ,→〉 with AF ∈ Dσ, an argument a ∈ A is:
• justified iff it is skeptically justified;
• defensible iff it is credulously justified but not skeptically justified;
• overruled iff it is not credulously justified. ♠
1.3 (Some) Semantics [Dun95]
Lemma 1 (Dung’s Fundamental Lemma, [Dun95, Lemma 10]). Given an argumentation framework AF = 〈A ,→〉, let S ⊆ A be a D-admissible set of arguments, and a, b be arguments which are D-acceptable with respect to S. Then:
1. S′ = S ∪ {a} is D-admissible; and
2. b is D-acceptable with respect to S′. ♣
Theorem 1 ([Dun95, Theorem 11]). Given an argumentation framework AF = 〈A ,→〉, the set of all D-admissible sets of 〈A ,→〉 forms a complete partial order with respect to set inclusion. ♣
Definition 15 (Complete Extension).
  • 13. Given an argumentation framework AF = 〈A ,→〉, S ⊆ A is a D-complete extension iff S is D-conflict-free and S = FAF(S). CO denotes the complete semantics. ♠
Definition 16 (Grounded Extension).
  • 14. Given an argumentation framework AF = 〈A ,→〉, the grounded extension of AF is the least complete extension of AF. GR denotes the grounded semantics. ♠
Definition 17 (Preferred Extension).
  • 15. Given an argumentation framework AF = 〈A ,→〉, a preferred extension of AF is a maximal (w.r.t. set inclusion) complete extension of AF. PR denotes the preferred semantics. ♠
Definition 18. Given an argumentation framework AF = 〈A ,→〉 and S ⊆ A , S+ ≜ {a ∈ A | ∃b ∈ S : b → a}. ♠
Definition 19 (Stable Extension).
  • 16. Given an argumentation framework AF = 〈A ,→〉, S ⊆ A is a stable extension of AF iff S is a preferred extension and S+ = A \ S. ST denotes the stable semantics. ♠
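The four semantics just defined can be enumerated by brute force over all subsets — hopeless beyond toy frameworks, but a faithful executable reading of Definitions 15–19. This is a sketch; all names and the example framework are mine.

```python
# Brute-force enumeration of complete / grounded / preferred / stable
# extensions, workable only for tiny frameworks.
from itertools import combinations

def subsets(args):
    for r in range(len(args) + 1):
        yield from (set(c) for c in combinations(sorted(args), r))

def conflict_free(attacks, S):
    return not any(x in S and y in S for (x, y) in attacks)

def F(args, attacks, S):
    """D-characteristic function F_AF."""
    return {a for a in args
            if all(any((s, b) in attacks for s in S)
                   for b in {x for (x, y) in attacks if y == a})}

def complete(args, attacks):
    # Definition 15: conflict-free fixed points of F
    return [S for S in subsets(args)
            if conflict_free(attacks, S) and S == F(args, attacks, S)]

def grounded(args, attacks):
    # Definition 16: the (unique) least complete extension
    return min(complete(args, attacks), key=len)

def preferred(args, attacks):
    # Definition 17: maximal complete extensions
    cs = complete(args, attacks)
    return [S for S in cs if not any(S < T for T in cs)]

def stable(args, attacks):
    # Definition 19: preferred extensions with S+ = A \ S
    return [S for S in preferred(args, attacks)
            if {y for (x, y) in attacks if x in S} == set(args) - S]

# a <-> b, b -> c: grounded extension empty, two preferred/stable ones
args = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "a"), ("b", "c")}
print(preferred(args, attacks))
```

On the example, the complete extensions are ∅, {b} and {a,c}; the last two are both preferred and stable, while the grounded extension is the empty set.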
  • 17. Dung’s AF • Labelling-Based Semantics Representation [Cam06]

                          CO    GR    PR    ST
D-conflict-free           Yes   Yes   Yes   Yes
D-admissibility           Yes   Yes   Yes   Yes
D-strong admissibility    No    Yes   No    No
D-reinstatement           Yes   Yes   Yes   Yes
D-I-maximality            No    Yes   Yes   Yes
D-directionality          Yes   Yes   Yes   No

Table 1.1: Satisfaction of general properties by argumentation semantics [BG07; BCG11]

Figure 1.1: Relationships among argumentation semantics (ST, PR, CO, GR)

1.4 Labelling-Based Semantics Representation [Cam06]
Definition 20. Let Γ = 〈A ,→〉 be an argumentation framework. A labelling Lab ∈ L(Γ) is a complete labelling of Γ iff it satisfies the following conditions for any a1 ∈ A :
• Lab(a1) = in ⇔ ∀a2 ∈ a1−, Lab(a2) = out;
• Lab(a1) = out ⇔ ∃a2 ∈ a1− : Lab(a2) = in. ♠
The grounded and preferred labellings can then be defined on the basis of complete labellings.
Definition 21. Let Γ = 〈A ,→〉 be an argumentation framework. A labelling Lab ∈ L(Γ) is the grounded labelling of Γ if it is the complete labelling of Γ minimizing the set of arguments labelled in, and it is a preferred labelling of Γ if it is a complete labelling of Γ maximizing the set of arguments labelled in. ♠
In order to show the connection between extensions and labellings, let us recall the definition of the function Ext2Lab, returning the labelling corresponding to a D-conflict-free set of arguments S.
Definition 22. Given an AF Γ = 〈A ,→〉 and a D-conflict-free set S ⊆ A , the corresponding labelling Ext2Lab(S) is defined as Ext2Lab(S) ≡ Lab, where
• Lab(a1) = in ⇔ a1 ∈ S
• Lab(a1) = out ⇔ ∃a2 ∈ S s.t. a2 → a1
  • 18. Dung’s AF • Labelling-Based Semantics Representation [Cam06]

          σ = CO       σ = GR       σ = PR      σ = ST
EXISTSσ   trivial      trivial      trivial     NP-c
CAσ       NP-c         polynomial   NP-c        NP-c
SAσ       polynomial   polynomial   Π^p_2-c     coNP-c
VERσ      polynomial   polynomial   coNP-c      polynomial
NEσ       NP-c         polynomial   NP-c        NP-c

Table 1.2: Complexity of decision problems by argumentation semantics [DW09]

• Lab(a1) = undec ⇔ a1 ∉ S ∧ ∄a2 ∈ S s.t. a2 → a1 ♠
[Cam06] shows that there is a bijective correspondence between the complete, grounded, preferred extensions and the complete, grounded, preferred labellings, respectively.
Proposition 1. Given an AF Γ = 〈A ,→〉, Lab is a complete (grounded, preferred) labelling of Γ if and only if there is a complete (grounded, preferred) extension S of Γ such that Lab = Ext2Lab(S). ♣
The set of complete labellings of Γ is denoted as LCO(Γ), the set of preferred labellings as LPR(Γ), while LGR(Γ) denotes the set including the grounded labelling.
Remark 1.
  • 19. To exercise yourself, try ArgTeach [DS14] at http://www-argteach.doc.ic.ac.uk/
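The Ext2Lab mapping of Definition 22 translates almost literally into code. A minimal sketch, with names and example of my own choosing:

```python
# Ext2Lab (Definition 22): from a D-conflict-free set S to a labelling.

def ext2lab(args, attacks, S):
    """Label members of S 'in', arguments attacked by S 'out',
    and everything else 'undec'."""
    lab = {}
    for a in args:
        if a in S:
            lab[a] = "in"
        elif any((b, a) in attacks for b in S):
            lab[a] = "out"
        else:
            lab[a] = "undec"
    return lab

# a <-> b, b -> c, with S = {a}: b is out, c stays undec since its
# only attacker b is not in S.
args = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "a"), ("b", "c")}
print(ext2lab(args, attacks, {"a"}))
```

By Proposition 1, feeding a complete extension to this function yields the corresponding complete labelling.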
  • 21. Dung’s AF • Skepticism Relationships [BG09b]
Figure 1.2: ⪯S⊕ relation for any argumentation framework (left) and for argumentation frameworks where stable extensions exist (right).
1.5 Skepticism Relationships [BG09b]
E1 ⪯E E2 denotes that E1 is at least as skeptical as E2.
Definition 23. Let ⪯E be a skepticism relation between sets of extensions. The skepticism relation between argumentation semantics ⪯S is such that for any argumentation semantics σ1 and σ2, σ1 ⪯S σ2 iff ∀AF ∈ Dσ1 ∩ Dσ2, EAF(σ1) ⪯E EAF(σ2). ♠
Definition 24. Given two sets of extensions E1 and E2 of an argumentation framework AF:
• E1 ⪯E∩+ E2 iff ∀E2 ∈ E2, ∃E1 ∈ E1: E1 ⊆ E2;
• E1 ⪯E∪+ E2 iff ∀E1 ∈ E1, ∃E2 ∈ E2: E1 ⊆ E2. ♠
Lemma 2. Given two argumentation semantics σ1 and σ2, if for any argumentation framework AF EAF(σ1) ⊆ EAF(σ2), then σ1 ⪯S∩+ σ2 and σ1 ⪯S∪+ σ2 (σ1 ⪯S⊕ σ2). ♣
1.6 Signatures [Dun+14]
Let A be a countably infinite domain of arguments, and
AFA = {〈A ,→〉 | A ⊆ A, → ⊆ A × A }.
Definition 25. The signature Σσ of a semantics σ is defined as Σσ = {σ(F) | F ∈ AFA} (i.e. the collection of all possible sets of extensions an AF can possess under a semantics). ♠
Given S ⊆ 2A, ArgsS ≜ ⋃S∈S S, PairsS ≜ {〈a,b〉 | ∃S ∈ S s.t. {a,b} ⊆ S}. S is called an extension-set if ArgsS is finite.
Definition 26. Let S ⊆ 2A. S is incomparable if ∀S,S′ ∈ S, S ⊆ S′ implies S = S′. ♠
  • 22. Dung’s AF • Signatures [Dun+14]
Definition 27. An extension-set S ⊆ 2A is tight if ∀S ∈ S and a ∈ ArgsS it holds that if S ∪ {a} ∉ S then there exists a b ∈ S such that 〈a,b〉 ∉ PairsS. ♠
Definition 28. S ⊆ 2A is adm-closed if for each A,B ∈ S the following holds: if 〈a,b〉 ∈ PairsS for each a,b ∈ A ∪ B, then also A ∪ B ∈ S. ♠
Proposition 2. For each F ∈ AFA:
• ST (F) is incomparable and tight;
• PR(F) is non-empty, incomparable and adm-closed. ♣
Theorem 2. The signatures for ST and PR are:
• ΣST = {S | S is incomparable and tight};
• ΣPR = {S ≠ ∅ | S is incomparable and adm-closed}. ♣
  • 23. Dung’s AF • Signatures [Dun+14]
Consider S = { {a,d,e}, {b,c,e}, {a,b,d} }
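The two signature properties of Definitions 26–27 can be tested mechanically on this extension-set. A sketch with helper names of my own: it reports that S is incomparable but not tight, because {a,d,e} ∪ {b} ∉ S while every pair 〈b,b′〉 with b′ ∈ {a,d,e} occurs in PairsS.

```python
# Checks of 'incomparable' and 'tight' for an extension-set,
# applied to the S on this slide.

def is_incomparable(S):
    """No extension is a proper subset of another (Definition 26)."""
    return all(A == B or (not A <= B and not B <= A)
               for A in S for B in S)

def pairs(S):
    """Pairs_S as unordered pairs (the defining condition is symmetric)."""
    return {frozenset({a, b}) for E in S for a in E for b in E}

def is_tight(S):
    """Definition 27: whenever E u {a} is not in S, some b in E
    must witness it via a missing pair."""
    args = set().union(*S)
    P = pairs(S)
    for E in S:
        for a in args:
            if E | {a} not in S:
                if not any(frozenset({a, b}) not in P for b in E):
                    return False
    return True

S = [{"a", "d", "e"}, {"b", "c", "e"}, {"a", "b", "d"}]
print(is_incomparable(S))  # True
print(is_tight(S))         # False
```

By Theorem 2, S is therefore outside the signature of the stable semantics.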
  • 24. Dung’s AF • Decomposability and Transparency [Bar+14]
1.7 Decomposability and Transparency [Bar+14]
Definition 29. Given an argumentation framework AF = (A ,→), a labelling-based semantics σ associates with AF a subset of L(AF), denoted as Lσ(AF). ♠
Definition 30. Given AF = (A ,→) and a set Args ⊆ A , the input of Args, denoted as Argsinp, is the set {B ∈ A \ Args | ∃A ∈ Args, (B, A) ∈ →}; the conditioning relation of Args, denoted as ArgsR, is defined as → ∩ (Argsinp × Args). ♠
Definition 31. An argumentation framework with input is a tuple (AF, I , LI , RI ), including an argumentation framework AF = (A ,→), a set of arguments I such that I ∩ A = ∅, a labelling LI ∈ L(I ) and a relation RI ⊆ I × A . A local function assigns to any argumentation framework with input a (possibly empty) set of labellings of AF, i.e. F(AF, I , LI , RI ) ∈ 2L(AF). ♠
Definition 32. Given an argumentation framework with input (AF, I , LI , RI ), the standard argumentation framework w.r.t. (AF, I , LI , RI ) is defined as AF′ = (A ∪ I ′, → ∪ R′I ), where I ′ = I ∪ {A′ | A ∈ out(LI )} and R′I = RI ∪ {(A′, A) | A ∈ out(LI )} ∪ {(A, A) | A ∈ undec(LI )}. ♠
Definition 33. Given a semantics σ, the canonical local function of σ (also called local function of σ) is defined as Fσ(AF, I , LI , RI ) = {Lab↓A | Lab ∈ Lσ(AF′)}, where AF = (A ,→) and AF′ is the standard argumentation framework w.r.t. (AF, I , LI , RI ). ♠
Definition 34. A semantics σ is complete-compatible iff the following conditions hold:
1. For any argumentation framework AF = (A ,→), every labelling L ∈ Lσ(AF) satisfies the following conditions:
• if A ∈ A is initial (i.e. unattacked), then L(A) = in
• if B ∈ A and there is an initial argument A which attacks B, then L(B) = out
• if C ∈ A is self-defeating, and there are no attackers of C besides C itself, then L(C) = undec
2.
for any set of arguments I and any labelling LI ∈ L(I ), the argumentation framework AF′ = (I ′, →′), where I ′ = I ∪ {A′ | A ∈ out(LI )} and →′ = {(A′, A) | A ∈ out(LI )} ∪ {(A, A) | A ∈ undec(LI )}, admits a (unique) labelling, i.e. |Lσ(AF′)| = 1. ♠
  • 25. Dung’s AF • Extension-based I/O Characterisation [GLW16]
Definition 35. A semantics σ is fully decomposable (or simply decomposable) iff there is a local function F such that for every argumentation framework AF = (A ,→) and every partition P = {P1,...,Pn} of A , Lσ(AF) = U (P , AF, F), where U (P , AF, F) ≜ {LP1 ∪ ··· ∪ LPn | LPi ∈ F(AF↓Pi , Pi^inp, (⋃_{j=1,...,n, j≠i} LPj )↓Pi^inp , Pi^R )}. ♠
Definition 36. A complete-compatible semantics σ is top-down decomposable iff for any argumentation framework AF = (A ,→) and any partition P = {P1,...,Pn} of A , it holds that Lσ(AF) ⊆ U (P , AF, Fσ). ♠
Definition 37. A complete-compatible semantics σ is bottom-up decomposable iff for any argumentation framework AF = (A ,→) and any partition P = {P1,...,Pn} of A , it holds that Lσ(AF) ⊇ U (P , AF, Fσ). ♠

                           CO    ST    GR    PR
Full decomposability       Yes   Yes   No    No
Top-down decomposability   Yes   Yes   Yes   Yes
Bottom-up decomposability  Yes   Yes   No    No

Table 1.3: Decomposability properties of argumentation semantics.
1.8 Extension-based I/O Characterisation [GLW16]
Definition 38. Given input arguments I and output arguments O with I ∩ O = ∅, an I/O-gadget is an AF F = (A, R) such that I, O ⊆ A and I−F = ∅ (the input arguments are unattacked in F). ♠
Definition 39. Given an I/O-gadget F = (A, R), the injection of J ⊆ I into F is the AF (F, J) = (A ∪ {z}, R ∪ {(z, i) | i ∈ (I \ J)}). ♠
Definition 40. An I/O-specification consists of two sets I, O ⊆ A and a total function f : 2^I → 2^(2^O). ♠
Definition 41. The I/O-gadget F satisfies I/O-specification f under semantics σ iff ∀J ⊆ I : σ((F, J))|O = f(J). ♠
Theorem 3. An I/O-specification f is satisfiable under σ iff
ST : no condition (f is always satisfiable)
PR : ∀J ⊆ I : |f(J)| ≥ 1
CO : ∀J ⊆ I : |f(J)| ≥ 1 ∧ ⋂ f(J) ∈ f(J)
GR : ∀J ⊆ I : |f(J)| = 1 ♣
  • 26. 2 Implementations
Acknowledgement This handout includes material from a number of collaborators, including Massimiliano Giacomin, Mauro Vallati, and Stefan Woltran.
A comprehensive survey has recently been published in [Cha+15].
2.1 Ad Hoc Procedures
NAD-Alg [NDA12; NAD14]
2.2 Constraint Satisfaction Programming
A Constraint Satisfaction Problem (CSP) [BS12; RBW08] is a triple P = 〈X, D, C〉 such that:
• X = 〈x1,...,xn〉 is a tuple of variables;
• D = 〈D1,...,Dn〉 is a tuple of domains such that ∀i, xi ∈ Di;
• C = 〈C1,...,Ct〉 is a tuple of constraints, where ∀j, Cj = 〈RSj , Sj〉, with scope Sj ⊆ {xi | xi is a variable} and RSj a relation over the domains of the variables in Sj.
A solution to the CSP P is an assignment A = 〈a1,...,an〉 where ∀i, ai ∈ Di and ∀j, RSj holds on the projection of A onto the scope Sj. If the set of solutions is empty, the CSP is unsatisfiable.
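A toy generate-and-test solver makes the triple 〈X, D, C〉 concrete; real CSP engines such as CONArg2 rely on constraint propagation and search instead. All names in this sketch are illustrative.

```python
# Minimal generate-and-test CSP solver: variables, finite domains,
# constraints as (scope, predicate) pairs.
from itertools import product

def solve_csp(variables, domains, constraints):
    """Enumerate all assignments satisfying every constraint."""
    solutions = []
    for values in product(*(sorted(domains[v]) for v in variables)):
        A = dict(zip(variables, values))
        if all(pred(*(A[v] for v in scope)) for scope, pred in constraints):
            solutions.append(A)
    return solutions

# Conflict-freeness for an attack a1 -> a2, as in the CONArg2 mapping:
# x_i = 1 iff a_i belongs to the set; forbid x1 = 1 and x2 = 1 together.
variables = ["x1", "x2"]
domains = {"x1": {0, 1}, "x2": {0, 1}}
constraints = [(("x1", "x2"), lambda v1, v2: not (v1 == 1 and v2 == 1))]
print(solve_csp(variables, domains, constraints))
```

Of the four candidate assignments, only {x1 = 1, x2 = 1} is excluded, matching the conflict-free subsets of a two-argument framework with a single attack.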
  • 27. Implementations • Answer Set Programming
CONArg2 [BS12]
In [BS12], the authors propose a mapping from AFs to CSPs. Given an AF Γ, they first create a variable for each argument, whose domain is always {0,1}: ∀ai ∈ A , ∃xi ∈ X such that Di = {0,1}.
Subsequently, they describe constraints associated with the different definitions of Dung’s argumentation framework: for instance, if a1 → a2, then D-conflict-freeness of {a1,a2} ⊆ A requires ¬(x1 = 1 ∧ x2 = 1).
2.3 Answer Set Programming
Answer Set Programming (ASP) [Fab13] is a declarative problem solving paradigm. In ASP, representation is done using a rule-based language, while reasoning is performed using implementations of general-purpose algorithms, referred to as ASP solvers.
AspartixM [EGW10; Dvo+11]
AspartixM [Dvo+11] expresses argumentation semantics in Answer Set Programming (ASP): a single program is used to encode a particular argumentation semantics, and the instance of an argumentation framework is given as an input database. Tests for subset-maximality exploit the metasp optimisation frontend for the ASP package gringo/claspD.
Given an AF Γ, Aspartix encodes the requirements for a semantics (e.g. the D-conflict-free requirements) in an ASP program whose database considers:
{arg(a) | a ∈ A } ∪ {defeat(a1,a2) | 〈a1,a2〉 ∈ →}
The following program fragments check D-conflict-freeness and compute stable extensions, respectively [Dvo+11]:
πcf = { in(X) ← not out(X), arg(X);
        out(X) ← not in(X), arg(X);
        ← in(X), in(Y), defeat(X,Y) }.
πST = { in(X) ← not out(X), arg(X);
        out(X) ← not in(X), arg(X);
        ← in(X), in(Y), defeat(X,Y);
        defeated(X) ← in(Y), defeat(Y,X);
        ← out(X), not defeated(X) }.
In SAT, formulae are commonly expressed in Conjunctive Normal Form (CNF). A formula in CNF is a conjunction of clauses, where clauses are disjunctions of literals, and a literal is either positive (a variable) or negative (the negation of a variable). If at least one of the literals in a clause is true, then the clause is satisfied, and if all clauses in the formula are satisfied then the formula is satisfied and a solution has been found.

PrefSAT [Cer+14b]

Requirements for a complete labelling as a CNF [Cer+14b]: for each argument ai ∈ A, three propositional variables are considered: Ii (which is true iff Lab(ai) = in), Oi (which is true iff Lab(ai) = out), and Ui (which is true iff Lab(ai) = undec). Given |A| = k and a bijection φ : {1,...,k} → A:

  ⋀_{i ∈ {1,...,k}} (Ii ∨ Oi ∨ Ui) ∧ (¬Ii ∨ ¬Oi) ∧ (¬Ii ∨ ¬Ui) ∧ (¬Oi ∨ ¬Ui)   (2.1)

  ⋀_{i | φ(i)⁻ = ∅} Ii   (2.2)

  ⋀_{i | φ(i)⁻ ≠ ∅} ( Ii ∨ ⋁_{j | φ(j)→φ(i)} ¬Oj )   (2.3)

  ⋀_{i | φ(i)⁻ ≠ ∅} ⋀_{j | φ(j)→φ(i)} ( ¬Ii ∨ Oj )   (2.4)

  ⋀_{i | φ(i)⁻ ≠ ∅} ⋀_{j | φ(j)→φ(i)} ( ¬Ij ∨ Oi )   (2.5)

  ⋀_{i | φ(i)⁻ ≠ ∅} ( ¬Oi ∨ ⋁_{j | φ(j)→φ(i)} Ij )   (2.6)

  ⋀_{i | φ(i)⁻ ≠ ∅} ⋀_{k | φ(k)→φ(i)} ( Ui ∨ ¬Uk ∨ ⋁_{j | φ(j)→φ(i)} Ij )   (2.7)

  ⋀_{i | φ(i)⁻ ≠ ∅} [ ⋀_{j | φ(j)→φ(i)} ( ¬Ui ∨ ¬Ij ) ∧ ( ¬Ui ∨ ⋁_{j | φ(j)→φ(i)} Uj ) ]   (2.8)

  ⋁_{i ∈ {1,...,k}} Ii   (2.9)
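The labelling conditions that clauses (2.1)–(2.8) encode — in iff all attackers are out; out iff some attacker is in; undec iff no attacker is in and not all attackers are out — can be checked directly by brute force on small AFs. A minimal sketch (the name `complete_labellings` is illustrative and this is not the SAT-based procedure itself):

```python
from itertools import product

def complete_labellings(args, attacks):
    """Enumerate complete labellings by testing every candidate
    three-valued labelling against the legality conditions that
    the CNF clauses (2.1)-(2.8) encode."""
    attackers = {a: {x for x, y in attacks if y == a} for a in args}
    labellings = []
    for labels in product(("in", "out", "undec"), repeat=len(args)):
        lab = dict(zip(args, labels))
        ok = True
        for a in args:
            legally_in = all(lab[b] == "out" for b in attackers[a])
            legally_out = any(lab[b] == "in" for b in attackers[a])
            if lab[a] == "in" and not legally_in:
                ok = False
            elif lab[a] == "out" and not legally_out:
                ok = False
            elif lab[a] == "undec" and (legally_in or legally_out):
                ok = False
        if ok:
            labellings.append(lab)
    return labellings

# Mutual attack a <-> b: three complete labellings
labs = complete_labellings(["a", "b"], {("a", "b"), ("b", "a")})
```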
As noted in [Cer+14b], the conjunction of the above formulae is redundant. However, the non-redundant CNFs are not equivalent from an empirical point of view [Cer+14b]: the overall performance is significantly affected by the chosen CNF encoding–SAT solver configuration pair.
Algorithm 1 Enumerating the D-preferred extensions of an AF

PrefSAT(Γ)
 1: Input: Γ = 〈A, →〉
 2: Output: Ep ⊆ 2^A
 3: Ep := ∅
 4: cnf := Π_Γ
 5: repeat
 6:   cnfdf := cnf
 7:   prefcand := ε
 8:   repeat
 9:     lastcompfound := SatS(cnfdf)
10:     if lastcompfound ≠ ε then
11:       prefcand := lastcompfound
12:       for a1 ∈ I-ARGS(lastcompfound) do
13:         cnfdf := cnfdf ∧ I_{φ⁻¹(a1)}
14:       end for
15:       remaining := FALSE
16:       for a1 ∈ A \ I-ARGS(lastcompfound) do
17:         remaining := remaining ∨ I_{φ⁻¹(a1)}
18:       end for
19:       cnfdf := cnfdf ∧ remaining
20:     end if
21:   until (lastcompfound = ε ∨ I-ARGS(lastcompfound) = A)
22:   if prefcand ≠ ε then
23:     Ep := Ep ∪ {I-ARGS(prefcand)}
24:     oppsolution := FALSE
25:     for a1 ∈ A \ I-ARGS(prefcand) do
26:       oppsolution := oppsolution ∨ I_{φ⁻¹(a1)}
27:     end for
28:     cnf := cnf ∧ oppsolution
29:   end if
30: until (prefcand = ε)
31: if Ep = ∅ then
32:   Ep := {∅}
33: end if
34: return Ep
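Algorithm 1 exploits the fact that the D-preferred extensions are exactly the subset-maximal in-sets of complete labellings, repeatedly asking the SAT solver for strictly larger in-sets. That characterisation can be sketched by brute force (illustrative only, for small AFs; `preferred_extensions` is an assumed name, not part of PrefSAT):

```python
from itertools import product

def preferred_extensions(args, attacks):
    """Preferred extensions = maximal (w.r.t. set inclusion) in-sets of
    complete labellings. Brute-force sketch of what PrefSAT computes."""
    attackers = {a: {x for x, y in attacks if y == a} for a in args}
    in_sets = []
    for labels in product(("in", "out", "undec"), repeat=len(args)):
        lab = dict(zip(args, labels))
        legal = True
        for a in args:
            legally_in = all(lab[b] == "out" for b in attackers[a])
            legally_out = any(lab[b] == "in" for b in attackers[a])
            if ((lab[a] == "in" and not legally_in)
                    or (lab[a] == "out" and not legally_out)
                    or (lab[a] == "undec" and (legally_in or legally_out))):
                legal = False
        if legal:
            in_sets.append({a for a in args if lab[a] == "in"})
    # keep only the subset-maximal in-sets
    return [S for S in in_sets if not any(S < T for T in in_sets)]

# Mutual attack a <-> b: two preferred extensions, {a} and {b}
prefs = preferred_extensions(["a", "b"], {("a", "b"), ("b", "a")})
```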
Parallel-SCCp [Cer+14a; Cer+15]

Based on the SCC-Recursiveness schema [BGG05]. [Figure: an example argumentation framework over arguments a–h decomposed into strongly connected components.]
Algorithm 1 Computing D-preferred labellings of an AF

P-PREF(Γ)
1: Input: Γ = 〈A, →〉
2: Output: Ep ∈ 2^L(Γ)
3: return P-SCC-REC(Γ, A)

Algorithm 2 Greedy computation of base cases

GREEDY(L, C)
1: Input: L = (L1,...,Ln := {S^n_1,...,S^n_h}), C ⊆ A
2: Output: M = {...,(Si, Bi),...}
3: M := ∅
4: for S ∈ ⋃_{i=1}^{n} Li do in parallel
5:   B := B-PR(Γ↓S, S ∩ C)
6:   M := M ∪ {(S, B)}
7: end for
8: return M

BOUNDCOND(Γ, Si, Lab) returns (O, I) where O = {a1 ∈ Si | ∃a2 ∈ S ∩ a1⁻ : Lab(a2) = in} and I = {a1 ∈ Si | ∀a2 ∈ S ∩ a1⁻, Lab(a2) = out}, with S ≡ S1 ∪ ... ∪ S_{i−1}.
Algorithm 3 Determining the D-grounded labelling of an AF in a set C

GROUNDED(Γ, C)
 1: Input: Γ = 〈A, →〉, C ⊆ A
 2: Output: (Lab, U) : U ⊆ A, Lab ∈ L_{A \ U}
 3: Lab := ∅
 4: U := A
 5: repeat
 6:   initialfound := ⊥
 7:   for a1 ∈ C do
 8:     if {a2 ∈ U | a2 → a1} = ∅ then
 9:       initialfound := ⊤
10:       Lab := Lab ∪ {(a1, in)}
11:       U := U \ {a1}
12:       C := C \ {a1}
13:       for a2 ∈ (U ∩ a1⁺) do
14:         Lab := Lab ∪ {(a2, out)}
15:         U := U \ {a2}
16:         C := C \ {a2}
17:       end for
18:     end if
19:   end for
20: until (¬initialfound)
21: return (Lab, U)
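Algorithm 3 can be sketched directly in Python: repeatedly label "in" every candidate argument with no remaining attacker in U, then label "out" everything it attacks, until a pass finds no new "initial" argument (the function name `grounded_labelling` is illustrative).

```python
def grounded_labelling(args, attacks):
    """Sketch of Algorithm 3. attacks is a set of (attacker, attacked)
    pairs; returns the labelling and the set U of unlabelled arguments."""
    lab = {}
    U = set(args)
    C = set(args)
    found = True
    while found:
        found = False
        for a1 in sorted(C):           # snapshot: C is mutated below
            if a1 in C and not any(x in U for x, y in attacks if y == a1):
                found = True
                lab[a1] = "in"         # no surviving attacker: label in
                U.discard(a1); C.discard(a1)
                for a2 in [y for x, y in attacks if x == a1 and y in U]:
                    lab[a2] = "out"    # everything a1 attacks is out
                    U.discard(a2); C.discard(a2)
    return lab, U

# Chain a -> b -> c: grounded labelling is {a: in, b: out, c: in}
lab, U = grounded_labelling(["a", "b", "c"], {("a", "b"), ("b", "c")})
```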
Algorithm 4 Computing D-preferred labellings of an AF in C

P-SCC-REC(Γ, C)
 1: Input: Γ = 〈A, →〉, C ⊆ A
 2: Output: Ep ∈ 2^L(Γ)
 3: (Lab, U) := GROUNDED(Γ, C)
 4: Ep := {Lab}
 5: Γ := Γ↓U
 6: L := (L1 := {S¹_1,...,S¹_k},...,Ln := {Sⁿ_1,...,Sⁿ_h}) = SCCS-LIST(Γ)
 7: M := {...,(Si, Bi),...} = GREEDY(L, C)
 8: for l ∈ {1,...,n} do
 9:   El := {E^{S1}_l := (),...,E^{Sk}_l := ()}
10:   for S ∈ Ll do in parallel
11:     for Lab ∈ Ep do in parallel
12:       (O, I) := L-COND(Γ, S, Ll, Lab)
13:       if I = ∅ then
14:         E^S_l[Lab] := {{(a1, out) | a1 ∈ O} ∪ {(a1, undec) | a1 ∈ S \ O}}
15:       else
16:         if I = S then
17:           E^S_l[Lab] := B where (S, B) ∈ M
18:         else
19:           if O = ∅ then
20:             E^S_l[Lab] := B-PR(Γ↓S, I ∩ C)
21:           else
22:             E^S_l[Lab] := {{(a1, out) | a1 ∈ O}}
23:             E^S_l[Lab] := E^S_l[Lab] ⊗ P-SCC-REC(Γ↓_{S \ O}, I ∩ C)
24:           end if
25:         end if
26:       end if
27:     end for
28:   end for
29:   for S ∈ Ll do
30:     E′p := ∅
31:     for Lab ∈ Ep do in parallel
32:       E′p := E′p ∪ ({Lab} ⊗ E^S_l[Lab])
33:     end for
34:     Ep := E′p
35:   end for
36: end for
37: return Ep
2.5 Second-order Solver [BJT16]

http://research.ics.aalto.fi/software/sat/sat-to-sat/so2grounder.shtml

Given a representation of an argumentation framework such that:

• a(X) holds iff X is an argument;
• r(X, Y) holds iff X attacks Y;

then:

• TCF = { ∀N, M | ¬( r(N, M) ∧ s(N) ∧ s(M) ). }
• TAD = { ∀N | att(N) ⇐⇒ ( a(N) ∧ ∃M | r(M, N) ∧ s(M) ).
          ∀N | def(N) ⇐⇒ ( a(N) ∧ ∀M | r(M, N) =⇒ att(M) ). }
• TFP = { TAD. ∀N | s(N) ⇐⇒ def(N). }
• TGR = { TFP. ¬∃ s′, att′, def′ : TFP[s/s′, def/def′, att/att′]
          ∧ ( ∀N | s′(N) =⇒ s(N) ) ∧ ( ∃N | s(N) ∧ ¬s′(N) ). }
• TST = { TAD. ∀N | a(N) =⇒ ( s(N) ⇐⇒ ¬att(N) ). }
• TCO = { TFP. TCF. }
• TPR = { TCO. ¬∃ s′, att′, def′ : TCO[s/s′, def/def′, att/att′]
          ∧ ( ∀N | s(N) =⇒ s′(N) ) ∧ ( ∃N | s′(N) ∧ ¬s(N) ). }

The unary predicate s describes the extensions in the various semantics.

2.6 Which One?

    We need to be smart.
        — Holger H. Hoos, Invited Keynote Talk at ECAI 2014

Features for AFs [VCG14; CGV14]

Directed Graph (26 features)
Structure: # vertices (|A|); # edges (|→|); # vertices / # edges (|A|/|→|); # edges / # vertices (|→|/|A|); density.
Degree (attackers): average; stdev; max; min.
SCCs: #; average; stdev; max; min.
Structure: # self-defeating arguments; # unattacked arguments; flow hierarchy; Eulerian; aperiodic.
CPU-time: ...
Undirected Graph (24 features)

Structure: # edges; # vertices / # edges; # edges / # vertices; density.
Degree: average; stdev; max; min.
SCCs: #; average; stdev; max; min.
Structure: transitivity.
3-cycles: #; average; stdev; max; min.
CPU-time: ...

Average CPU-time (and stdev) needed for extracting the features:

Directed Graph Features (DG)
Class       Mean CPU-time  StdDev  # feat
Graph Size  0.001          0.009   5
Degree      0.003          0.009   4
SCC         0.046          0.036   5
Structure   2.304          2.868   5

Undirected Graph Features (UG)
Class       Mean CPU-time  StdDev  # feat
Graph Size  0.001          0.003   4
Degree      0.002          0.004   4
SCC         0.011          0.009   5
Structure   0.799          0.684   1
Triangles   0.787          0.671   5

Best Features for Runtime Prediction [CGV14]

Determined by a greedy forward search based on the Correlation-based Feature Selection (CFS) attribute evaluator.
Solver      B1              B2                B3
AspartixM   num. arguments  density (DG)      size max. SCC
PrefSAT     density (DG)    num. SCCs         aperiodicity
NAD-Alg     density (DG)    CPU-time density  CPU-time Eulerian
SSCp        density (DG)    num. SCCs         size max. SCC

Predicting the (Log) Runtime [CGV14]

RMSE of regression (lower is better):

            B1    B2    B3    DG    UG    SCC   All
AspartixM   0.66  0.49  0.49  0.48  0.49  0.52  0.48
PrefSAT     1.39  0.93  0.93  0.89  0.92  0.94  0.89
NAD-Alg     1.48  1.47  1.47  0.77  0.57  1.61  0.55
SSCp        1.36  0.80  0.78  0.75  0.75  0.79  0.74

The error on the (log) runtime is computed as √( (1/n) ∑_{i=1}^{n} ( log₁₀(tᵢ) − log₁₀(yᵢ) )² ).

Best Features for Classification [CGV14]

Determined by a greedy forward search based on the Correlation-based Feature Selection (CFS) attribute evaluator.

C-B1: num. arguments; C-B2: density (DG); C-B3: min attackers.

Classification [CGV14]

Classification (higher is better):

                 C-B1   C-B2   C-B3   DG     UG     SCC    All
Accuracy         48.5%  70.1%  69.9%  78.9%  79.0%  55.3%  79.5%
Prec. AspartixM  35.0%  64.6%  63.7%  74.5%  74.9%  42.2%  76.1%
Prec. PrefSAT    53.7%  67.8%  68.1%  79.6%  80.5%  60.4%  80.1%
Prec. NAD-Alg    26.5%  69.2%  69.0%  81.7%  85.1%  35.3%  86.0%
Prec. SSCp       54.3%  73.0%  72.7%  76.6%  76.8%  57.8%  77.2%

Selecting the Best Algorithm [CGV14]

Metric: Fastest (max. 1007)

AspartixM            106
NAD-Alg              170
PrefSAT              278
SSCp                 453
EPMs Regression      755
EPMs Classification  788
Metric: IPC (max. 1007)

NAD-Alg              210.1
AspartixM            288.3
PrefSAT              546.7
SSCp                 662.4
EPMs Regression      887.7
EPMs Classification  928.1

IPC score¹: for each AF, each system gets a score of T*/T, where T is its execution time and T* is the best execution time among the compared systems, or a score of 0 if it fails on that AF. Runtimes below 0.01 seconds get by default the maximal score of 1. The IPC score considers, at the same time, the runtimes and the solved instances.

¹ http://ipc.informatik.uni-freiburg.de/
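The per-AF IPC scoring rule can be written down in a few lines (a sketch; `ipc_score` and the use of None to mark a failed run are illustrative conventions, not taken from the IPC tooling):

```python
def ipc_score(runtimes):
    """IPC score for a single AF. runtimes maps system -> runtime in
    seconds, or None if the system failed on this instance. Each system
    scores T*/T, 0 on failure, and 1 outright below 0.01 s."""
    solved = [t for t in runtimes.values() if t is not None]
    best = min(solved) if solved else None  # T*: best runtime on this AF
    scores = {}
    for system, t in runtimes.items():
        if t is None:
            scores[system] = 0.0            # failed instance
        elif t < 0.01:
            scores[system] = 1.0            # below resolution: max score
        else:
            scores[system] = best / t
    # per-AF scores are then summed over all AFs in the benchmark
    return scores

scores = ipc_score({"A": 2.0, "B": 4.0, "C": None})
# A is the fastest (score 1.0), B takes twice as long (0.5), C failed (0.0)
```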
3 Ranking-Based Semantics

3.1 The Categoriser Semantics [BH01]

Definition 42 ([BH01]). Let Γ = 〈A, →〉 be an argumentation framework. The categoriser function Cat : A → ]0, 1] is defined as:

  Cat(a1) = 1                                    if a1⁻ = ∅
  Cat(a1) = 1 / ( 1 + ∑_{a2 ∈ a1⁻} Cat(a2) )     otherwise
♠

3.2 Properties for Ranking-Based Semantics [Bon+16]

Preliminary notions

Definition 43. Let Γ = 〈A, →〉 and a1, a2 ∈ A. A path from a2 to a1, noted P(a2, a1), is a sequence s = 〈b0,...,bn〉 of arguments such that b0 = a1, bn = a2, and ∀i < n, 〈b_{i+1}, b_i〉 ∈ →. We denote by lP = n the length of P.

A defender (resp. attacker) of a1 is an argument situated at the beginning of an even-length (resp. odd-length) path. We denote the multisets of defenders and attackers of a1 by R⁺ₙ(a1) = {a2 | ∃P(a2, a1) with lP ∈ 2ℕ} and R⁻ₙ(a1) = {a2 | ∃P(a2, a1) with lP ∈ 2ℕ + 1} respectively. The direct attackers of a1 are the arguments in R⁻₁(a1) = a1⁻. An argument a1 is defended if R⁺₂(a1) = {a1⁻}⁻ ≠ ∅.

A defence root (resp. attack root) is a non-attacked defender (resp. attacker). We denote the multisets of defence roots and attack roots of a1 by BR⁺ₙ(a1) = {a2 ∈ R⁺ₙ(a1) | a2⁻ = ∅} and BR⁻ₙ(a1) = {a2 ∈ R⁻ₙ(a1) | a2⁻ = ∅} respectively. A path from a2 to a1 is a defence branch (resp. attack branch) if a2 is a defence (resp. attack) root of a1. Let us note BR⁺(a1) = ⋃ₙ BR⁺ₙ(a1) and BR⁻(a1) = ⋃ₙ BR⁻ₙ(a1). ♠

Definition 44. A ranking-based semantics σ associates to any argumentation framework Γ = 〈A, →〉 a ranking ⪰σΓ on A, where ⪰σΓ is a preorder (a reflexive and transitive relation) on A. a1 ⪰σΓ a2 means that a1 is at least as acceptable as a2; a1 ≃σΓ a2 iff a1 ⪰σΓ a2 and a2 ⪰σΓ a1. ♠

Definition 45. The lexicographical order between two vectors of real numbers V = 〈V1,...,Vn〉 and V′ = 〈V′1,...,V′n〉 is defined as V ≻lex V′ iff ∃i ≤ n s.t. Vi > V′i and ∀j < i, Vj = V′j. ♠
Definition 46. An isomorphism γ between two argumentation frameworks Γ = 〈A, →〉 and Γ′ = 〈A′, →′〉 is a bijective function γ : A → A′ such that ∀a, b ∈ A, 〈a, b〉 ∈ → iff 〈γ(a), γ(b)〉 ∈ →′. With a slight abuse of notation, we will note Γ′ = γ(Γ). ♠

Definition 47 ([AB13]). Let ≥S be a ranking on a set of arguments A. For any S1, S2 ⊆ A, S1 ≥S S2 is a group comparison iff there exists an injective mapping f from S2 to S1 such that ∀a1 ∈ S2, f(a1) ≥ a1. S1 >S S2 is a strict group comparison iff S1 ≥S S2 and (|S2| < |S1| or ∃a1 ∈ S2, f(a1) > a1). ♠

Definition 48. Let Γ = 〈A, →〉 and a1 ∈ A. The defence of a1 is simple iff every defender of a1 attacks exactly one direct attacker of a1. The defence of a1 is distributed iff every direct attacker of a1 is attacked by at most one argument. ♠

Definition 49. Let Γ = 〈A, →〉, a1 ∈ A. The defence branch added to a1 is P+(a1) = 〈A′, →′〉, with A′ = {b0,...,bn}, n ∈ 2ℕ, b0 = a1, A′ ∩ A = {a1}, and →′ = {〈bi, b_{i−1}〉 | 1 ≤ i ≤ n}. The attack branch added to a1, denoted P−(a1), is defined similarly except that the sequence is of odd length (i.e. n ∈ 2ℕ + 1). ♠

Properties

Given a ranking-based semantics σ, Γ = 〈A, →〉, ∀a1, a2 ∈ A:

Abstraction (Abs) [AB13]. The ranking on A should be defined only on the basis of the attacks between arguments. Let Γ′ = 〈A′, →′〉. For any isomorphism γ s.t. Γ′ = γ(Γ), a1 ⪰σΓ a2 iff γ(a1) ⪰σΓ′ γ(a2).

Independence (In) [MT08; AB13]. The ranking between two arguments a1 and a2 should be independent of any argument that is connected to neither a1 nor a2. ∀Γ′ = 〈A′, →′〉 ∈ cc(Γ),¹ ∀a1, a2 ∈ A′, a1 ⪰σΓ′ a2 ⇒ a1 ⪰σΓ a2.

Void Precedence (VP) [CL05; MT08; AB13]. A non-attacked argument is ranked strictly higher than any attacked argument. a1⁻ = ∅ and a2⁻ ≠ ∅ ⇒ a1 ≻σ a2.

Self-Contradiction (SC) [MT08]. A self-attacking argument is ranked lower than any non-self-attacking argument. 〈a1, a1〉 ∉ → and 〈a2, a2〉 ∈ → ⇒ a1 ≻σ a2.
¹ cc(Γ) denotes the set of connected components of an AF Γ.
Cardinality Precedence (CP) [AB13]. The greater the number of direct attackers for an argument, the weaker the level of acceptability of this argument. |a1⁻| < |a2⁻| ⇒ a1 ≻σ a2.

Quality Precedence (QP) [AB13]. The greater the acceptability of one direct attacker for an argument, the weaker the level of acceptability of this argument. ∃a3 ∈ a2⁻ s.t. ∀a4 ∈ a1⁻, a3 ≻σ a4 ⇒ a1 ≻σ a2.

Counter-Transitivity (CT) [AB13]. If the direct attackers of a2 are at least as numerous and acceptable as those of a1, then a1 is at least as acceptable as a2. a2⁻ ≥S a1⁻ ⇒ a1 ⪰σ a2.

Strict Counter-Transitivity (SCT) [AB13]. If CT is satisfied and either the direct attackers of a2 are strictly more numerous or more acceptable than those of a1, then a1 is strictly more acceptable than a2. a2⁻ >S a1⁻ ⇒ a1 ≻σ a2.

Defence Precedence (DP) [AB13]. For two arguments with the same number of direct attackers, a defended argument is ranked higher than a non-defended argument. |a1⁻| = |a2⁻|, {a1⁻}⁻ ≠ ∅ and {a2⁻}⁻ = ∅ ⇒ a1 ≻σ a2.

Distributed-Defence Precedence (DDP) [AB13]. The best defence is when each defender attacks a distinct attacker. |a1⁻| = |a2⁻| and |{a1⁻}⁻| = |{a2⁻}⁻|: if the defence of a1 is simple and distributed and the defence of a2 is simple but not distributed, then a1 ≻σ a2.

Strict addition of Defence Branch (⊕DB) [CL05]. Adding a defence branch to any argument improves its ranking. Given an isomorphism γ: if Γ* = Γ ∪ γ(Γ) ∪ P+(γ(a1)), then γ(a1) ≻σΓ* a1.

Increase of Defence Branch (↑DB) [CL05]. Increasing the length of a defence branch of an argument degrades its ranking. Given an isomorphism γ: if a2 ∈ BR⁺(a1), a2 ∉ BR⁻(a1) and Γ* = Γ ∪ γ(Γ) ∪ P+(γ(a2)), then a1 ≻σΓ* γ(a1).

Addition of Defence Branch (+DB) [CL05]. Adding a defence branch to an attacked argument improves its ranking. Given an isomorphism γ: if Γ* = Γ ∪ γ(Γ) ∪ P+(γ(a1)) and |a1⁻| ≠ 0, then γ(a1) ≻σΓ* a1.
Increase of Attack Branch (↑AB) [CL05]. Increasing the length of an attack branch of an argument improves its ranking. Given an isomorphism γ: if a2 ∈ BR⁻(a1), a2 ∉ BR⁺(a1) and Γ* = Γ ∪ γ(Γ) ∪ P+(γ(a2)), then γ(a1) ≻σΓ* a1.

Addition of Attack Branch (+AB) [CL05]. Adding an attack branch to any argument degrades its ranking. Given an isomorphism γ: if Γ* = Γ ∪ γ(Γ) ∪ P−(γ(a1)), then a1 ≻σΓ* γ(a1).

Total (Tot) [Bon+16]. All pairs of arguments can be compared. a1 ⪰σ a2 or a2 ⪰σ a1.

Non-attacked Equivalence (NaE) [Bon+16]. All the non-attacked arguments have the same rank. a1⁻ = ∅ and a2⁻ = ∅ ⇒ a1 ≃σ a2.

Attack vs Full Defence (AvsFD) [Bon+16]. An argument without any attack branch is ranked higher than an argument only attacked by one non-attacked argument. Γ is acyclic, |BR⁻(a1)| = 0, |a2⁻| = 1, and |{a2⁻}⁻| = 0 ⇒ a1 ≻σ a2.

Table 3.1: Incompatible properties
CP incompatible with QP [AB13]
CP incompatible with AvsFD [Bon+16]
CP incompatible with +DB [Bon+16]
VP incompatible with ⊕DB [Bon+16]

Table 3.2: Dependencies among properties
SCT implies VP [AB13]
CT implies DP [AB13]
SCT implies CT [Bon+16]
CT implies NaE [Bon+16]
⊕DB implies +DB [Bon+16]
Property  Yes/No  Comment
Abs       Yes
In        Yes
VP        Yes     Implied by SCT
DP        Yes     Implied by CT
CT        Yes     Implied by SCT
SCT       Yes
CP        No
QP        No
DDP       No
SC        No
⊕DB       No      Incompatible with VP
+AB       Yes
+DB       No
↑AB       Yes
↑DB       Yes
Tot       Yes
NaE       Yes     Implied by CT
AvsFD     No

Table 3.3: Properties satisfied by Cat [BH01]
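To make Definition 42 concrete, the categoriser values can be computed by fixed-point iteration (a minimal sketch; the name `categoriser` and the fixed iteration count are illustrative choices, not from [BH01]):

```python
def categoriser(args, attacks, iterations=1000):
    """Fixed-point iteration for the categoriser function:
    Cat(a) = 1 if a has no attackers, else
    Cat(a) = 1 / (1 + sum of its attackers' values)."""
    attackers = {a: [x for x, y in attacks if y == a] for a in args}
    cat = {a: 1.0 for a in args}
    for _ in range(iterations):
        cat = {a: 1.0 if not attackers[a]
               else 1.0 / (1.0 + sum(cat[b] for b in attackers[a]))
               for a in args}
    return cat

# Single attack a -> b: Cat(a) = 1 (unattacked), Cat(b) = 1/(1+1) = 0.5,
# so a is ranked above b, as Void Precedence requires.
cat = categoriser(["a", "b"], {("a", "b")})
```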
4 Argumentation Schemes

Argumentation schemes [WRM08] are reasoning patterns which generate arguments:

• deductive/inductive inferences that represent forms of common types of arguments used in everyday discourse and in special contexts (e.g. legal argumentation);
• neither deductive nor inductive, but defeasible, presumptive, or abductive.

Moreover, an argument satisfying a pattern may not be very strong by itself, but may be strong enough to provide evidence to warrant rational acceptance of its conclusion, given that its premises are acceptable.

According to Toulmin [Tou58], such an argument can be plausible and thus accepted after a balance of considerations in an investigation or discussion moved forward as new evidence is being collected. The investigation can then move ahead, even under conditions of uncertainty and lack of knowledge, using the conclusions tentatively accepted.

4.1 An example: Walton et al.'s Argumentation Schemes for Practical Reasoning

    Suppose I am deliberating with my spouse on what to do with our pension investment fund — whether to buy stocks, bonds or some other type of investments. We consult with a financial adviser, an expert source of information who can tell us what is happening in the stock market, and so forth, at the present time [Wal97].

Premises for practical inference:

1. one states that an agent ("I") has a particular goal;
2. the other states the means for the agent to bring about that goal.

〈S0, S1,...,Sn〉 represents a sequence of states of affairs that can be ordered temporally from earlier to later. A state of affairs is meant to be like a statement, but one describing some event or occurrence that can be brought about by an agent. It may be a human action, or it may be a natural event.
Practical Inference

Premises:
• Goal Premise: bringing about Sn is my goal.
• Means Premise: in order to bring about Sn, I need to bring about Si.

Conclusion: therefore, I need to bring about Si.

Critical questions:
• Other-Means Question: are there alternative possible actions to bring about Si that could also lead to the goal?
• Best-Means Question: is Si the best (or most favourable) of the alternatives?
• Other-Goals Question: do I have goals other than Si whose achievement is preferable and that should have priority?
• Possibility Question: is it possible to bring about Si in the given circumstances?
• Side Effects Question: would bringing about Si have known bad consequences that ought to be taken into account?

4.2 AS and Dialogues

Dialogue for practical reasoning: all moves (propose, prefer, justify) are coordinated in a formal deliberation dialogue that has eight stages [HMP01].

1. Opening of the deliberation dialogue, and the raising of a governing question about what is to be done.
2. Discussion of: (a) the governing question; (b) desirable goals; (c) any constraints on the possible actions which may be considered; (d) perspectives by which proposals may be evaluated; and (e) any premises (facts) relevant to this evaluation.
3. Suggesting of possible action-options appropriate to the governing question.
4. Commenting on proposals from various perspectives.
5. Revising of: (a) the governing question, (b) goals, (c) constraints, (d) perspectives, and/or (e) action-options in the light of the comments presented; and the undertaking of any information-gathering or fact-checking required for resolution.
6. Recommending an option for action, and acceptance or non-acceptance of this recommendation by each participant.
7. Confirming acceptance of a recommended option by each participant.
8. Closing of the deliberation dialogue.

Proposals are initially made at stage 3, and then evaluated at stages 4, 5 and 6. Especially at stage 5, much argumentation taking the form of practical reasoning would seem to be involved.

As discussed in [Wal06], there are three dialectical adequacy conditions for defining the speech act of making a proposal.

The Proponent's Requirement (Condition 1). The proponent puts forward a statement that describes an action and says that both proponent and respondent (or the respondent group) should carry out this action. The proponent is committed to carrying out that action: the statement has the logical form of the conclusion of a practical inference, and also expresses an attitude toward that statement.

The Respondent's Requirement (Condition 2). The statement is put forward with the aim of offering reasons of a kind that will lead the respondent to become committed to it.

The Governing Question Requirement (Condition 3). The job of the proponent is to overcome doubts or conflicts of opinions, while the job of the respondent is to express them. Thus the role of the respondent is to ask questions that cast the prudential reasonableness of the action in the statement into doubt, and to mount attacks (counter-arguments and rebuttals) against it.

Condition 3 relates to the global structure of the dialogue, whereas conditions 1 and 2 are more localised to the part where the proposal was made.
Condition 3 relates to the global burden of proof [Wal14] and the roles of the two parties in the dialogue as a whole.

Speech acts [MP02], like making a proposal, are seen as types of moves in a dialogue that are governed by rules. Three basic characteristics of any type of move have to be defined:
1. the pre-conditions of the move;
2. the conditions defining the move itself;
3. the post-conditions that state the result of the move.

Preconditions

• At least two agents (proponent and opponent);
• A governing question;
• A set of statements (propositions);
• The proponent proposes the proposition to the respondent if and only if:
  1. there is a set of premises that the proponent is committed to, and that fit the premises of the argumentation scheme for practical reasoning;
  2. the proponent is advocating these premises, that is, he is making a claim that they are true or applicable in the case at issue;
  3. there is an inference from these premises fitting the argumentation scheme for practical reasoning; and
  4. the proposition is the conclusion of the inference.

The Defining Conditions

The central defining condition sets out the conditions defining the structure of the move of making a proposal.

The Goal Statement: we have a goal G.
The Means Statement: bringing about p is necessary (or sufficient) for us to bring about G.
Then the inference follows.
The Proposal Statement: we should (practically ought to) bring about p.
Proposal Statement in the form of an AS

Premises:
• Goal Statement: we have a goal G.
• Means Statement: bringing about p is necessary (or sufficient) for us to bring about G.

Conclusion: we should (practically ought to) bring about p.

The Post-Conditions

The central post-condition is the response condition. The proposal must be open to critical questioning by the opponent. The proponent should be open to answering doubts and objections corresponding to any one of the five critical questions for practical reasoning, as well as to counter-proposals, and is in charge of giving reasons why her proposal is better than the alternatives.

The response condition set by these critical questions helps to explain how and why the maker of a proposal needs to be open to questioning and to requests for justification.
5 A Semantic-Web View of Argumentation

Acknowledgement. This handout includes material from a number of collaborators including Chris Reed. An overview can also be found in [Bex+13].

5.1 The Argument Interchange Format [Rah+11]

Figure 5.1: Original AIF Ontology [Che+06; Rah+11]. [Figure: a Graph (argument network) has Nodes and Edges; Nodes specialise (is-a) into Information Nodes (I-Nodes) and Scheme Nodes (S-Nodes); S-Nodes specialise into rule of inference application nodes (RA-Nodes), conflict application nodes (CA-Nodes), preference application nodes (PA-Nodes), and derived concept application nodes (e.g. defeat); S-Nodes use schemes, which specialise into rule of inference schemes (logical and presumptive), conflict schemes, and preference schemes.]

5.2 An Ontology of Arguments [Rah+11]

Please download Protégé from http://protege.stanford.edu/ and the AIF OWL version from http://www.arg.dundee.ac.uk/wp-content/uploads/AIF.owl

Representation of the argument described in Figure 5.2:

  ___jobArg : PracticalReasoning_Inference
    fulfils(___jobArg, PracticalReasoning_Scheme)
    hasGoalPlan_Premise(___jobArg, ___jobArgGoalPlan)
    hasConclusion(___jobArg, ___jobArgConclusion)
    hasGoal_Premise(___jobArg, ___jobArgGoal)

  ___jobArgConclusion : EncouragedAction_Statement
    fulfils(___jobArgConclusion, EncouragedAction_Desc)
Figure 5.2: An argument network linking instances of argument and scheme components. [Figure: an instance of the Practical Inference scheme — "Bringing about being rich is my goal" and "In order to bring about being rich I need to bring about having a job" fulfil the premise descriptions and support "Therefore I need to bring about having a job", which fulfils the conclusion description.]

Figure 5.3: Examples of conflicts [Rah+11, Fig. 2]. [Figure: a symmetric attack between two modus ponens arguments with contradictory conclusions, and an undercut attack on the application of a rule.]

    claimText(___jobArgConclusion, "Therefore I need to bring about having a job")

  ___jobArgGoal : Goal_Statement
    fulfils(___jobArgGoal, Goal_Desc)
    claimText(___jobArgGoal, "Bringing about being rich is my goal")

  ___jobArgGoalPlan : GoalPlan_Statement
    fulfils(___jobArgGoalPlan, GoalPlan_Desc)
    claimText(___jobArgGoalPlan, "In order to bring about being rich I need to bring about having a job")
Relevant portion of the AIF ontology

EncouragedAction_Statement
  EncouragedAction_Statement ⊑ Statement
GoalPlan_Statement
  GoalPlan_Statement ⊑ Statement
Goal_Statement
  Goal_Statement ⊑ Statement
I-node
  I-node ≡ Statement
  I-node ⊑ Node
  I-node ⊑ ¬S-node
Inference
  Inference ≡ RA-node
  Inference ⊑ ∃ fulfils Inference_Scheme
  Inference ⊑ ≥1 hasPremise Statement
  Inference ⊑ Scheme_Application
  Inference ⊑ =1 hasConclusion (Scheme_Application ⊔ Statement)
Inference_Scheme
  Inference_Scheme ⊑ Scheme
  Inference_Scheme ⊑ ≥1 hasPremise_Desc Statement_Description
  Inference_Scheme ⊑ =1 hasConclusion_Desc (Scheme ⊔ Statement_Description)
PracticalReasoning_Inference
  PracticalReasoning_Inference ≡ Presumptive_Inference ⊓ ∃ hasConclusion EncouragedAction_Statement ⊓ ∃ hasGoalPlan_Premise GoalPlan_Statement ⊓ ∃ hasGoal_Premise Goal_Statement
RA-node
  RA-node ≡ Inference
  RA-node ⊑ S-node
S-node
  S-node ≡ Scheme_Application
  S-node ⊑ Node
  S-node ⊑ ¬I-node
Scheme
  Scheme ⊑ Form
  Scheme ⊑ ¬Statement_Description
Scheme_Application
  Scheme_Application ≡ S-node
  Scheme_Application ⊑ ∃ fulfils Scheme
  Scheme_Application ⊑ Thing
  Scheme_Application ⊑ ¬Statement
Statement
  Statement ≡ NegStatement
  Statement ≡ I-node
  Statement ⊑ Thing
  Statement ⊑ ∃ fulfils Statement_Description
  Statement ⊑ ¬Scheme_Application
Statement_Description
  Statement_Description ⊑ Form
  Statement_Description ⊑ ¬Scheme
fulfils
  ∃ fulfils Thing ⊑ Node
hasConclusion_Desc
  ∃ hasConclusion_Desc Thing ⊑ Inference_Scheme
hasGoalPlan_Premise
  hasGoalPlan_Premise ⊑ hasPremise
hasGoal_Premise
  hasGoal_Premise ⊑ hasPremise
claimText
  ∃ claimText DatatypeLiteral ⊑ Statement
  ⊤ ⊑ ∀ claimText DatatypeString

Individuals of EncouragedAction_Desc

  EncouragedAction_Desc : Statement_Description
  formDescription(EncouragedAction_Desc, "A should be brought about")
Individuals of GoalPlan_Desc

  GoalPlan_Desc : Statement_Description
  formDescription(GoalPlan_Desc, "Bringing about B is the way to bring about A")

Individuals of Goal_Desc

  Goal_Desc : Statement_Description
  formDescription(Goal_Desc, "The goal is to bring about A")

Individuals of PracticalReasoning_Scheme

  PracticalReasoning_Scheme : PresumptiveInference_Scheme
  hasPremise_Desc(PracticalReasoning_Scheme, Goal_Desc)
  hasConclusion_Desc(PracticalReasoning_Scheme, EncouragedAction_Desc)
  hasPremise_Desc(PracticalReasoning_Scheme, GoalPlan_Desc)
6 A novel synthesis: Collaborative Intelligence Spaces (CISpaces)

Acknowledgement. This handout includes material from a number of collaborators including Alice Toniolo and Timothy J. Norman. Main reference: [Ton+15].

6.1 Introduction

Problem

• Intelligence analysis is critical for making well-informed decisions.
• Complexities in current military operations increase the amount of information available to intelligence analysts.

CISpaces (Collaborative Intelligence Spaces)

• A toolkit developed to support collaborative intelligence analysis.
• CISpaces aims to improve situational understanding of evolving situations.

6.2 Intelligence Analysis

Definition 50 ([DCD11]). The directed and coordinated acquisition and analysis of information to assess capabilities, intent and opportunities for exploitation by leaders at all levels. ♠

Fig. 6.1 summarises the Pirolli and Card model [PC05]. Table 6.1 illustrates the problems of individual analysis and how collaborative analysis can improve on it.
Figure 6.1: The Pirolli & Card Model [PC05]. [Figure: external data sources feed a foraging loop — search and filter information into a shoebox, then search for evidence to build an evidence file — followed by a sense-making loop — schematize, build a case, tell a story — with re-evaluation and searches for support feeding back; structure and effort increase along the process.]

Table 6.1: Individual vs. Collaborative Analysis

Individual analysis:
• Scattered information & noise
• Hard to make connections
• Missing information
• Cognitive biases
• Missing expertise

Collaborative analysis:
• More effective and reliable
• Brings together different expertise and resources
• Prevents biases
Figure 6.2: Initial information assigned to Joe. [Figure: a map of Kishshire — harbour, Kish farm, the Kish river, a water pipe, an aqueduct, and Kish Hall Hotel — with two reports: "Illness among young and elderly people in Kishshire caused by bacteria" and "Unidentified illness is affecting the local livestock in Kishshire, the rural area of Kish".]

Figure 6.3: Further events happening in Kish. [Figure: a timeline — tests on people/livestock reveal illness among people and livestock; a water test shows a bacteria in the water supply; an answer to a POI request reports "GER-MAN" seen in Kish; an explosion occurs in Kish Hall Hotel.]

Example of Intelligence Analysis Process

Goal: discover potential threats in Kish.
Analysts: Joe, Miles and Ella.
What Joe knows is summarised by Figs. 6.2 and 6.3.

Main critical points and possible conclusions during the analysis:

• causes of water contamination → waterborne/non-waterborne bacteria;
• POI responsible for water contamination;
• causes of the hotel explosion.
6.3 Reasoning with Evidence
• Identify what to believe happened from the claims constructed upon information (the sensemaking process);
• Derive conclusions from data aggregated from explicitly requested information (the crowdsourcing process);
• Assess what is credible according to the history of data manipulation (the provenance reasoning process).

6.4 Arguments for Sensemaking

Formal Linkage for Semantics Computation

A CISpaces graph, WAT, can be transformed into a corresponding ASPIC-based argumentation theory. An edge in CISpaces is represented textually as →, an info/claim node is written pi, and a link node is referred to as type, where type ∈ {Pro, Con}. Then [p1,...,pn → Pro → pφ] indicates that the Pro-link has p1,...,pn as incoming nodes and pφ as its outgoing node.

Definition 51. A WAT is a tuple 〈K, AS〉 such that AS = 〈L, ¯, R〉 is constructed as follows:
• L is a propositional logic language, and a node corresponds to a proposition p ∈ L. The WAT set of propositions is Lw.
• The set R is formed by rules ri ∈ R corresponding to Pro links between nodes, such that [p1,..., pn → Pro → pφ] is converted to ri : p1,..., pn ⇒ pφ.
• The contrariness function between elements is defined as: i) if [p1 → Con → p2] and [p2 → Con → p1], then p1 and p2 are contradictory; ii) if [p1 → Con → p2] and p1 is the only premise of the Con link, then p1 is a contrary of p2; iii) if [p1, p3 → Con → p2], then a rule ri : p1, p3 ⇒ ph is added so that p1 and p3 form an argument with conclusion ph against p2, and ph is a contrary of p2. ♠

Definition 52. K is composed of propositions pi, K = {pj, pi,...}, such that: i) if a set of rules r1,...,rn ∈ R forms a cycle, i.e. for every pi that is the consequent of some rule there is a rule containing pi as an antecedent, then pi ∈ K if pi is an info-node; ii) otherwise, pi ∈ K if pi is not the consequent of any rule r ∈ R. ♠
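The translation of Definition 51 can be sketched programmatically. The following is an illustrative sketch, not CISpaces code: all function and variable names are hypothetical, and links are modelled simply as (premises, conclusion) pairs.

```python
# Illustrative sketch (not CISpaces code) of Definition 51: converting
# Pro/Con links of a WAT-style graph into ASPIC-style defeasible rules
# and a contrariness relation. All names are hypothetical.

def wat_to_theory(pro_links, con_links):
    """pro_links, con_links: lists of (premises, conclusion) pairs."""
    # each Pro link [p1,...,pn -> Pro -> pphi] becomes a rule ri: p1,...,pn => pphi
    rules = [(i, prems, concl) for i, (prems, concl) in enumerate(pro_links)]
    contraries = set()  # pairs (x, y) meaning "x is a contrary of y"
    fresh = 0
    for prems, target in con_links:
        if len(prems) == 1:
            p = prems[0]
            if ([target], p) in con_links:
                # case i): symmetric Con links make p and target contradictory
                contraries |= {(p, target), (target, p)}
            else:
                # case ii): single-premise Con link, p is a contrary of target
                contraries.add((p, target))
        else:
            # case iii): multi-premise Con link builds an argument for a
            # fresh conclusion ph, which is a contrary of target
            ph = f"aux_{fresh}"; fresh += 1
            rules.append((len(rules), list(prems), ph))
            contraries.add((ph, target))
    return rules, contraries

rules, contraries = wat_to_theory(
    pro_links=[(["p1", "p2"], "p3")],
    con_links=[(["p4"], "p3"), (["p3"], "p4")],
)
```

In this run, `rules` contains the single Pro rule p1, p2 ⇒ p3, and `contraries` records p3 and p4 as contradictory, as in case i) of the definition.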
An Example of Argumentation Schemes for Intelligence Analysis

Intelligence analysis broadly consists of three components: Activities (Act), including actions performed by actors and events happening in the world; Entities (Et), including actors as individuals or groups, and objects such as resources; and Facts (Ft), including statements about the state of the world regarding entities and activities.

A hypothesis in intelligence analysis is composed of activities and events that show how the situation has evolved. The argument from cause to effect (ArgCE) forms the basis of these hypotheses. The scheme, adapted from [WRM08], is:

Argument from cause to effect

Premises:
• Typically, if C (either a fact Fti or an activity Acti) occurs, then E (either a fact Fti or an activity Acti) will occur
• In this case, C occurs

Conclusion: In this case E will occur

Critical questions:
CQCE1 Is there evidence for C to occur?
CQCE2 Is there a general rule for C causing E?
CQCE3 Is the relationship between C and E causal?
CQCE4 Are there any exceptions to the causal rule that prevent the effect E from occurring?
CQCE5 Has C happened before E?
CQCE6 Is there any other C that caused E?

Formally:
rCE : rule(R, C, E), occur(C), before(C, E), ruletype(R, causal), noexceptions(R) ⇒ occur(E)
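A hedged sketch of how the formal rule rCE can fire over a fact base: the rule licenses occur(E) only when every premise holds. The fact base and predicate names below are illustrative (loosely inspired by the Kish example), not taken from CISpaces itself.

```python
# Hedged sketch of the ArgCE rule rCE over a set of ground facts.
# Facts are tuples (predicate, args...); the fact base is assumed.

facts = {
    ("rule", "r1", "contaminated_water", "people_ill"),
    ("ruletype", "r1", "causal"),
    ("occur", "contaminated_water"),
    ("before", "contaminated_water", "people_ill"),
    ("noexceptions", "r1"),
}

def apply_arg_ce(facts):
    """Return every conclusion occur(E) licensed by rCE over the fact base."""
    conclusions = set()
    for f in facts:
        if f[0] == "rule":
            _, r, cause, effect = f
            # all premises of rCE must hold for the rule to fire
            if {("occur", cause), ("before", cause, effect),
                ("ruletype", r, "causal"), ("noexceptions", r)} <= facts:
                conclusions.add(("occur", effect))
    return conclusions

print(apply_arg_ce(facts))  # -> {('occur', 'people_ill')}
```

Note how the critical questions map onto the premises: removing ("noexceptions", "r1"), i.e. answering CQCE4 positively, blocks the conclusion, which is the defeasible character of the scheme.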
Figure 6.4: PROV Data Model [MM13]. Entities, activities and actors related by used, wasGeneratedBy, wasInformedBy, wasAssociatedWith, actedOnBehalfOf, wasAttributedTo and wasDerivedFrom.

Figure 6.5: Provenance of Joe’s information. The claim pjID ("Bacteria contaminates local water") is derived, through timestamped p-activities (monitoring of water supply, lab water testing, report generation), from a water sample and a water monitoring requirement, with the NGO lab assistant and the NGO chemical lab as associated p-agents.

6.5 Arguments for Provenance

Provenance can be used to annotate how, where, when and by whom some information was produced [MM13]. Figure 6.4 depicts the core model for representing provenance, and Figure 6.5 shows an example of provenance for the pieces of information for analyst Joe w.r.t. the water contamination problem in Kish.

Patterns representing relevant provenance information that may warrant the credibility of a datum can be integrated into the analysis by applying the argument scheme for provenance (ArgPV) [Ton+14]:
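A PROV-DM-style provenance graph can be represented as a set of labelled edges and walked backwards to collect the p-elements a claim depends on. This is a minimal sketch in the spirit of [MM13]; the element names loosely mirror Figure 6.5 but are assumed.

```python
# Minimal sketch of a PROV-DM-style provenance graph as labelled edges
# (subject, relation, object); names mirror Figure 6.5 but are assumed.

prov = [
    ("water_report", "wasGeneratedBy", "report_generation"),
    ("water_report", "wasDerivedFrom", "water_sample"),
    ("report_generation", "used", "water_sample"),
    ("report_generation", "wasAssociatedWith", "ngo_lab_assistant"),
    ("water_sample", "wasGeneratedBy", "water_testing"),
    ("water_testing", "wasAssociatedWith", "ngo_chemical_lab"),
]

def provenance_of(element, prov):
    """Collect every p-element reachable backwards from `element`:
    roughly the GP(pj) pattern the ArgPV scheme inspects."""
    seen, frontier = set(), [element]
    while frontier:
        node = frontier.pop()
        for s, _rel, o in prov:
            if s == node and o not in seen:
                seen.add(o)
                frontier.append(o)
    return seen

# every p-element the report's credibility may depend on:
print(sorted(provenance_of("water_report", prov)))
```

Critical question CQPV3 then amounts to inspecting this set for p-elements (e.g. an untrusted p-agent) that undermine belief in pj.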
Argument Scheme for Provenance

Premises:
• Given pj about activity Acti, entity Eti, or fact Fti (ppv1)
• GP(pj) includes pattern Pm of p-entities Apv, p-activities Ppv, p-agents Agpv involved in producing pj (ppv2)
• GP(pj) infers that information pj is true (ppv3)

Conclusion: Acti/Eti/Fti in pj may plausibly be true (ppvcn)

Critical questions:
CQPV1 Is pj consistent with other information?
CQPV2 Is pj supported by evidence?
CQPV3 Does GP(pj) contain p-elements that lead us not to believe pj?
CQPV4 Is there any other p-element that should have been included in GP(pj) to infer that pj is credible?
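The conclusion of ArgPV is presumptive: it stands by default and falls when a critical question, once asked, is answered against the argument. A minimal sketch of this default-and-defeat pattern, with illustrative question ids:

```python
# Sketch: critical questions as defeaters of a presumptive conclusion.
# Question ids and answers are illustrative, not from CISpaces.

def presumptively_acceptable(premises_hold, cq_answers):
    """cq_answers maps a CQ id to True when the answer defeats the argument."""
    return premises_hold and not any(cq_answers.values())

# ArgPV instance for pj = "bacteria contaminates local water":
# premises hold and CQPV1/CQPV2 raise no objection -> plausibly true
assert presumptively_acceptable(True, {"CQPV1": False, "CQPV2": False})
# CQPV3: the provenance graph contains an untrusted p-agent -> defeated
assert not presumptively_acceptable(True, {"CQPV3": True})
```

The same pattern applies to ArgCE: each critical question marks a potential attack point on the instantiated scheme.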
7 Natural Language Interfaces

7.1 Experiments with Humans: Scenarios [CTO14]

Scenario 1.B
The weather forecasting service of the broadcasting company AAA says that it will rain tomorrow. Meanwhile, the forecast service of the broadcasting company BBB says that it will be cloudy tomorrow but that it will not rain. It is also well known that the forecasting service of BBB is more accurate than the one of AAA.

Γ1.B = 〈S1.B, D1.B〉, where:

S1.B:
  s1 : ⇒ sAAA
  s2 : ⇒ sBBB

D1.B:
  r1 : sAAA ∧ ∼exAAA ⇒ rain
  r2 : sBBB ∧ ∼exBBB ⇒ ¬rain
  r3 : ∼exaccuracy ⇒ r1 ≺ r2

Γ1.B gives rise to the following set of arguments: A1.B = {a1 = 〈s1, r1〉, a2 = 〈s2, r2〉, a3 = 〈r3〉}, where a2 A1.B-defeats a1. Therefore the set of justified arguments (which is also the unique stable extension) is {a2, a3}.

Scenario 1.E
The weather forecasting service of the broadcasting company AAA says that it will rain tomorrow. Meanwhile, the forecast service of the broadcasting company BBB says that it will be cloudy tomorrow but that it will not rain. It is also well known that the forecasting service of BBB is more accurate than the one of AAA. However, yesterday the trustworthy newspaper CCC published an article which said that BBB has cut the resources for its weather forecasting service in the past months, thus making it less reliable than in the past.

Γ1.E = 〈S1.E, D1.E〉, where S1.E = S1.B ∪ {s3 : ⇒ sCCC}, and D1.E = D1.B ∪ {r4 : sCCC ∧ ∼exCCC ⇒ cut, r5 : cut ∧ ∼excut ⇒ exaccuracy}.

Γ1.E gives rise to the following set of arguments A1.E = A1.B ∪ {a4 = 〈s3, r4, r5〉}. a4 is the unique justified argument, while the defensible extensions (which are also stable) are {a1, a4} and {a2, a4}.
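The claimed extensions can be checked by brute force at the abstract level. The attack relations below are my reading of the two scenarios (in 1.B, a2 defeats a1 because BBB is more accurate; in 1.E, a4 undercuts the preference argument a3, leaving a1 and a2 to rebut each other), not something spelled out in the text.

```python
# Brute-force enumeration of stable extensions for the abstract frameworks
# that Scenarios 1.B and 1.E induce (attack relations are assumed).
from itertools import combinations

def stable_extensions(args, attacks):
    """S is stable iff it is conflict-free and attacks every argument outside S."""
    out = []
    for k in range(len(args) + 1):
        for cand in combinations(args, k):
            s = set(cand)
            conflict_free = not any((a, b) in attacks for a in s for b in s)
            attacks_rest = all(any((a, b) in attacks for a in s)
                               for b in args if b not in s)
            if conflict_free and attacks_rest:
                out.append(s)
    return out

# Scenario 1.B: only a2 defeats a1 -> unique stable extension {a2, a3}
print(stable_extensions(["a1", "a2", "a3"], {("a2", "a1")}))
# Scenario 1.E: a4 undercuts a3; a1 and a2 rebut each other symmetrically
print(stable_extensions(["a1", "a2", "a3", "a4"],
                        {("a4", "a3"), ("a1", "a2"), ("a2", "a1")}))
```

The enumeration reproduces the extensions stated above: {a2, a3} for 1.B, and {a1, a4}, {a2, a4} for 1.E.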
Scenario 2.B
In a TV debate, the politician AAA argues that if Region X becomes independent then X’s citizens will be poorer than now. Subsequently, financial expert Dr. BBB presents a document, which scientifically shows that Region X will not be worse off financially if it becomes independent.

Γ2.B = 〈S2.B, D2.B〉, where:

S2.B:
  s1 : ⇒ sAAA
  s2 : ⇒ sBBB
  s3 : ⇒ sdoc

D2.B:
  r1 : sAAA ∧ ∼exAAA ⇒ poorer
  r2 : sBBB ∧ sdoc ∧ ∼exBBB ∧ ∼exdoc ⇒ ¬poorer
  r3 : ∼exexpert ⇒ r1 ≺ r2

Γ2.B gives rise to the following set of arguments A2.B = {a1 = 〈s1, r1〉, a2 = 〈s2, s3, r2〉, a3 = 〈r3〉}, where a2 A2.B-defeats a1. Therefore the set of justified arguments is {a2, a3}.

Scenario 2.E
In a TV debate, the politician AAA argues that if Region X becomes independent then X’s citizens will be poorer than now. Subsequently, financial expert Dr. BBB presents a document, which scientifically shows that Region X will not be worse off financially if it becomes independent. After that, the moderator of the debate reminds BBB of more recent research by several important economists that disputes the claims in that document.

Γ2.E = 〈S2.E, D2.E〉, where S2.E = S2.B ∪ {s4 : ⇒ sresearch, s5 : sresearch ⇒ ¬sdoc}, and D2.E = D2.B. Γ2.E gives rise to the following set of arguments A2.E = A2.B ∪ {a4 = 〈s4, s5〉}. Therefore, there are two stable extensions, which are also the defensible extensions: {a1, a3, a4} and {a2, a3}.

Scenario 3.B
You are planning to buy a second-hand car, and you go to a dealership with BBB, a mechanic who has been recommended to you by a friend. The salesperson AAA shows you a car and says that it needs very little work done to it. BBB says it will require quite a lot of work, because in the past he had to fix several issues in a car of the same model.
Γ3.B = 〈S3.B, D3.B〉, where:

S3.B:
  s1 : ⇒ sAAA
  s2 : ⇒ sBBB

D3.B:
  r1 : sAAA ∧ ∼exAAA ⇒ ¬work
  r2 : sBBB ∧ ∼exBBB ⇒ work
  r3 : ∼exprofessional ⇒ r1 ≺ r2

Γ3.B gives rise to the following set of arguments A3.B = {a1 = 〈s1, r1〉, a2 = 〈s2, r2〉, a3 = 〈r3〉}, where a2 A3.B-defeats a1. Therefore the set of justified arguments (which is also the unique stable extension) is {a2, a3}.

Scenario 3.E
You are planning to buy a second-hand car, and you go to a dealership with BBB, a mechanic who has been recommended to you by a friend. The salesperson AAA shows you a car and says that it needs very little work done to it. BBB says it will require quite a lot of work, because in the past he had to fix several issues in a car of the same model. While you are at the dealership, your friend calls you to tell you that he knows (beyond a shadow of a doubt) that BBB made unnecessary repairs to his car last month.

Γ3.E = 〈S3.E, D3.E〉, where S3.E = S3.B ∪ {s3 : ⇒ sfriend}, and D3.E = D3.B ∪ {r4 : sfriend ∧ ∼exfriend ⇒ unnec_work, r5 : unnec_work ∧ ∼exunnec_work ⇒ exprofessional}.

Γ3.E gives rise to the following set of arguments A3.E = A3.B ∪ {a4 = 〈s3, r4, r5〉}. Similarly to Scenario 1.E, a4 is the only justified argument and there are two stable extensions: {a1, a4} and {a2, a4}.

Scenario 4.B
After several dates, you would like to start a serious relationship with J, so you ask two close friends of yours, AAA and BBB, for advice. You have known BBB for longer than you have known AAA. AAA tells you that J is lovely and you should go ahead, while BBB suggests that you should be very cautious because J might have a hidden agenda.

Γ4.B = 〈S4.B, D4.B〉, where:

S4.B:
  s1 : ⇒ sAAA
  s2 : ⇒ sBBB

D4.B:
  r1 : sAAA ∧ ∼exAAA ⇒ go
  r2 : sBBB ∧ ∼exBBB ⇒ ¬go
  r3 : ∼exbest_friend ⇒ r1 ≺ r2
Γ4.B gives rise to the following set of arguments A4.B = {a1 = 〈s1, r1〉, a2 = 〈s2, r2〉, a3 = 〈r3〉}, where a2 A4.B-defeats a1. Therefore the set of justified arguments (which is also the unique stable extension) is {a2, a3}.

Scenario 4.E
After several dates, you would like to start a serious relationship with J, so you ask two friends of yours, AAA and BBB, for advice. You have known BBB for longer than you have known AAA. AAA tells you that J is lovely and you should go ahead, while BBB suggests that you should be very cautious because J might have a hidden agenda. After some weeks, CCC, who is also a close friend of BBB, tells you that BBB has been into you for years; BBB is too shy to tell you about their feelings about you, but is still possessive of you.

Γ4.E = 〈S4.E, D4.E〉, where S4.E = S4.B ∪ {s3 : ⇒ sCCC}, and D4.E = D4.B ∪ {r4 : sCCC ∧ ∼exCCC ⇒ possessive, r5 : possessive ∧ ∼expossessive ⇒ ¬(r1 ≺ r2)}.

Γ4.E gives rise to the following set of arguments A4.E = A4.B ∪ {a4 = 〈s3, r4, r5〉}, with no justified arguments. The stable extensions are: {a1, a4}, {a2, a3}, {a2, a4}.

Results

Figure 7.1: Distribution of the final conclusion PA/PB/PU, comparing base cases with extended cases, in percent (bar chart, y-axis 0–60%).
                 Base Cases            Extended Cases
                 PA    PB    PU        PA    PB    PU
1, weather       5.0  50.0  45.0      15.8  21.1  63.2
2, politics      5.3  63.2  31.6      21.1  10.5  68.4
3, buying car    0.0  68.2  31.8      23.8  23.8  52.4
4, romance      12.5  68.8  18.8      48.0  36.0  16.0

Table 7.1: Distribution of the final conclusion PA/PB/PU in percent, for each scenario. Shading denotes the most likely conclusions.

Figure 7.2: Distribution across three categories of justification (U1: lack of information; U2: domain-specific reasons; U3: other) for agreement with the PU position in scenarios 1.B and 3.B (bar chart, y-axis 0–60%).
(a)
                  Base cases       Extended cases
                  RB†    Md∗B      RE†    Md∗E     C.D.‡
Relevance
  1, weather    110.38   6.00     82.92   4.00     46.60
  2, politics   107.45   6.00     69.45   4.00     47.19
  3, buying car 118.05   6.50     67.45   4.00     44.38
  4, romance     48.34   2.00     44.40   2.00     46.57
Agreement
  1, weather    116.38   6.00     87.18   4.00     46.60
  2, politics   103.34   6.00     65.05   4.00     47.19
  3, buying car 121.93   6.50     64.33   4.00     44.38
  4, romance     44.94   2.00     44.20   2.00     46.57

(b)
              Scenario 3.B      Scenario 4.B
              R3.B†   Md∗3.B    R4.B†   Md∗4.B    C.D.‡
Relevance    118.05    6.50     48.34    2.00     47.79
Agreement    121.93    6.50     44.94    2.00     47.79

Table 7.2: Post-hoc analysis regarding relevance and agreement: pairwise comparison base-extended cases (a); and between 3.B and 4.B (b). Statistically significant cases (i.e. when |Rx − Ry| > C.D.) are highlighted in grey.
† Mean rank as computed with the Kruskal-Wallis test
∗ Median
‡ Critical Difference, as computed in [SC88] cited by [Fie09], with α = 0.05.

7.2 Lessons From Argument Mining: [BR11]

[Diagram: a dialogue in which Bob asserts "Lower taxes stimulate the economy" and "The government will inevitably lower the tax rate", and Wilma challenges with "Why?". The asserting, challenging and substantiating moves anchor, on the inference side, an application of the argument scheme for Argument from Positive Consequences and an arguing move over the implicit premise "Bob is credible".]
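The significance criterion behind Table 7.2 is simple to replicate: a pairwise difference of mean ranks is significant when |Rx − Ry| exceeds the critical difference C.D. A small worked check over the relevance rows of Table 7.2(a):

```python
# Significance check as used in Table 7.2: |Rx - Ry| > C.D.
# Mean ranks and critical differences copied from the relevance rows.
rows = {                      # scenario: (R_base, R_extended, C.D.)
    "1, weather":    (110.38, 82.92, 46.60),
    "2, politics":   (107.45, 69.45, 47.19),
    "3, buying car": (118.05, 67.45, 44.38),
    "4, romance":    (48.34, 44.40, 46.57),
}

significant = {name for name, (rb, re, cd) in rows.items()
               if abs(rb - re) > cd}
print(significant)  # only the "buying car" scenario exceeds its C.D.
```

So, among the relevance comparisons in (a), only the base-extended difference for scenario 3 (118.05 − 67.45 = 50.60 > 44.38) is significant at α = 0.05.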
Bibliography

[AB13] Leila Amgoud and Jonathan Ben-Naim. “Ranking-Based Semantics for Argumentation Frameworks”. In: Scalable Uncertainty Management: 7th International Conference, SUM 2013, Washington, DC, USA, September 16-18, 2013. Proceedings. Ed. by Weiru Liu, V. S. Subrahmanian, and Jef Wijsen. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013, pp. 134–147.

[Bar+14] Pietro Baroni et al. “On the Input/Output behavior of argumentation frameworks”. In: Artificial Intelligence 217 (2014), pp. 144–197. URL: http://www.sciencedirect.com/science/article/pii/S0004370214001131.

[BCG11] P. Baroni, M. Caminada, and M. Giacomin. “An introduction to argumentation semantics”. In: Knowledge Engineering Review 26.4 (2011), pp. 365–410.

[Bex+13] Floris Bex et al. “Implementing the argument web”. In: Communications of the ACM 56.10 (Oct. 2013), p. 66.

[BG07] Pietro Baroni and Massimiliano Giacomin. “On principle-based evaluation of extension-based argumentation semantics”. In: Artificial Intelligence (Special issue on Argumentation in A.I.) 171.10/15 (2007), pp. 675–700.

[BG09a] Pietro Baroni and Massimiliano Giacomin. “Semantics of Abstract Argument Systems”. In: Argumentation in Artificial Intelligence. Ed. by Guillermo Simari and Iyad Rahwan. Springer US, 2009, pp. 25–44.

[BG09b] Pietro Baroni and Massimiliano Giacomin. “Skepticism relations for comparing argumentation semantics”. In: International Journal of Approximate Reasoning 50.6 (June 2009), pp. 854–866. ISSN: 0888-613X. DOI: 10.1016/j.ijar.2009.02.006.

[BGG05] Pietro Baroni, Massimiliano Giacomin, and Giovanni Guida. “SCC-recursiveness: a general schema for argumentation semantics”. In: Artificial Intelligence 168.1-2 (2005), pp. 165–210.
[BH01] Philippe Besnard and Anthony Hunter. “A logic-based theory of deductive arguments”. In: Artificial Intelligence 128 (2001), pp. 203–235. ISSN: 0004-3702. DOI: 10.1016/S0004-3702(01)00071-6.

[BJT16] Bart Bogaerts, Tomi Janhunen, and Shahab Tasharrofi. “Declarative Solver Development: Case Studies”. In: Principles of Knowledge Representation and Reasoning: Proceedings of the Fifteenth International Conference, KR 2016, Cape Town, South Africa, April 25-29, 2016. 2016, pp. 74–83. URL: http://www.aaai.org/ocs/index.php/KR/KR16/paper/view/12822.

[Bon+16] Elise Bonzon et al. “A Comparative Study of Ranking-based Semantics for Abstract Argumentation”. In: Proceedings of the 30th AAAI Conference on Artificial Intelligence (AAAI’16). 2016. arXiv: 1602.01059.

[BR11] Katarzyna Budzynska and Chris Reed. Whence inference? Tech. rep. University of Dundee, 2011.

[BS12] Stefano Bistarelli and Francesco Santini. “Modeling and Solving AFs with a Constraint-Based Tool: ConArg”. In: Theory and Applications of Formal Argumentation. Vol. 7132. Springer, 2012, pp. 99–116. ISBN: 978-3-642-29183-8.

[Cam06] Martin Caminada. “On the Issue of Reinstatement in Argumentation”. In: Proceedings of the 10th European Conference on Logics in Artificial Intelligence (JELIA 2006). 2006, pp. 111–123. ISBN: 3-540-39625-X.

[Cer+14a] Federico Cerutti et al. “A SCC Recursive Meta-Algorithm for Computing Preferred Labellings in Abstract Argumentation”. In: 14th International Conference on Principles of Knowledge Representation and Reasoning. Ed. by Chitta Baral and Giuseppe De Giacomo. 2014, pp. 42–51. URL: http://www.aaai.org/ocs/index.php/KR/KR14/paper/view/7974.

[Cer+14b] Federico Cerutti et al. “Computing Preferred Extensions in Abstract Argumentation: A SAT-Based Approach”. In: TAFA 2013. Ed. by Elizabeth Black, Sanjay Modgil, and Nir Oren. Vol. 8306. Lecture Notes in Computer Science. Springer-Verlag Berlin Heidelberg, 2014, pp. 176–193. URL: http://link.springer.com/chapter/10.1007/978-3-642-54373-9_12.
[Cer+15] Federico Cerutti et al. “Exploiting Parallelism for Hard Problems in Abstract Argumentation”. In: 29th AAAI Conference - AAAI 2015. 2015, pp. 1475–1481. URL: http://www.aaai.org/ocs/index.php/AAAI/AAAI15/paper/viewFile/9451/9421.

[CGV14] Federico Cerutti, Massimiliano Giacomin, and Mauro Vallati. “Algorithm Selection for Preferred Extensions Enumeration”. In: 5th Conference on Computational Models of Argument. Ed. by Simon Parsons et al. 2014, pp. 221–232. URL: http://ebooks.iospress.nl/volumearticle/37791.

[Cha+15] Günther Charwat et al. “Methods for solving reasoning problems in abstract argumentation — A survey”. In: Artificial Intelligence 220 (Mar. 2015), pp. 28–63. ISSN: 0004-3702. DOI: 10.1016/j.artint.2014.11.008.

[Che+06] Carlos Iván Chesnevar et al. “Towards an argument interchange format”. In: The Knowledge Engineering Review 21.04 (Dec. 2006), p. 293. ISSN: 0269-8889. DOI: 10.1017/S0269888906001044.

[CL05] Claudette Cayrol and Marie-Christine Lagasquie-Schiex. “Graduality in argumentation”. In: Journal of Artificial Intelligence Research 23.1 (2005), pp. 245–297.

[CTO14] Federico Cerutti, Nava Tintarev, and Nir Oren. “Formal Arguments, Preferences, and Natural Language Interfaces to Humans: an Empirical Evaluation”. In: 21st European Conference on Artificial Intelligence. 2014, pp. 207–212. URL: http://ebooks.iospress.nl/volumearticle/36941.

[DCD11] DCDC. Understanding and Intelligence Support to Joint Operations. Tech. rep. 2011.

[DS14] Jeremie Dauphin and Claudia Schulz. “Arg Teach - A Learning Tool for Argumentation Theory”. In: 2014 IEEE 26th International Conference on Tools with Artificial Intelligence. IEEE, 2014, pp. 776–783.

[Dun+14] Paul E. Dunne et al. “Characteristics of Multiple Viewpoints in Abstract Argumentation”. In: Proceedings of the 14th Conference on Principles of Knowledge Representation and Reasoning. 2014, pp. 72–81.
[Dun95] Phan Minh Dung. “On the Acceptability of Arguments and Its Fundamental Role in Nonmonotonic Reasoning, Logic Programming, and n-Person Games”. In: Artificial Intelligence 77.2 (1995), pp. 321–357.

[Dvo+11] Wolfgang Dvořák et al. “Making Use of Advances in Answer-Set Programming for Abstract Argumentation Systems”. In: Proceedings of the 19th International Conference on Applications of Declarative Programming and Knowledge Management (INAP 2011). 2011.

[DW09] Paul E. Dunne and Michael Wooldridge. “Complexity of abstract argumentation”. In: Argumentation in AI. Ed. by I. Rahwan and G. Simari. Springer-Verlag, 2009. Chap. 5, pp. 85–104.

[EGW10] Uwe Egly, Sarah Alice Gaggl, and Stefan Woltran. “Answer-set programming encodings for argumentation frameworks”. In: Argument & Computation 1.2 (June 2010), pp. 147–177. ISSN: 1946-2166. DOI: 10.1080/19462166.2010.486479.

[Fab13] Wolfgang Faber. “Answer Set Programming”. In: Reasoning Web. Semantic Technologies for Intelligent Data Access. Vol. 8067. Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2013, pp. 162–193.

[Fie09] Andy Field. Discovering Statistics Using SPSS (Introducing Statistical Methods series). SAGE Publications Ltd, 2009. ISBN: 1847879071.

[GLW16] Massimiliano Giacomin, Thomas Linsbichler, and Stefan Woltran. “On the Functional Completeness of Argumentation Semantics”. In: Knowledge Representation and Reasoning Conference (KR). 2016.

[HMP01] D. Hitchcock, P. McBurney, and P. Parsons. “A Framework for Deliberation Dialogues”. In: Proceedings of the Fourth Biennial Conference of the Ontario Society for the Study of Argumentation (OSSA 2001). Ed. by H. V. Hansen et al. 2001.

[MM13] L. Moreau and P. Missier. PROV-DM: The PROV Data Model. Available at http://www.w3.org/TR/prov-dm/. Apr. 2013.

[MP02] Peter McBurney and Simon Parsons. “Games that agents play: A formal framework for dialogues between autonomous agents”. In: Journal of Logic, Language and Information 11.3 (2002),
pp. 315–334.

[MT08] Paul-Amaury Matt and Francesca Toni. “A Game-Theoretic Measure of Argument Strength for Abstract Argumentation”. In: 11th European Conference on Logics in Artificial Intelligence (JELIA’08). 2008, pp. 285–297.

[NAD14] Samer Nofal, Katie Atkinson, and Paul E. Dunne. “Algorithms for decision problems in argument systems under preferred semantics”. In: Artificial Intelligence 207 (2014), pp. 23–51. URL: http://www.sciencedirect.com/science/article/pii/S0004370213001161.

[NDA12] S. Nofal, P. E. Dunne, and K. Atkinson. “On Preferred Extension Enumeration in Abstract Argumentation”. In: Proceedings of 3rd International Conference on Computational Models of Arguments (COMMA 2012). 2012, pp. 205–216.

[PC05] P. Pirolli and S. Card. “The sensemaking process and leverage points for analyst technology as identified through cognitive task analysis”. In: Proceedings of the International Conference on Intelligence Analysis. 2005.

[Pra10] Henry Prakken. “An abstract framework for argumentation with structured arguments”. In: Argument & Computation 1.2 (June 2010), pp. 93–124. ISSN: 1946-2166. DOI: 10.1080/19462160903564592.

[Pre+14] Alun Preece et al. “Human-machine conversations to support multi-agency missions”. In: ACM SIGMOBILE Mobile Computing and Communications Review 18.1 (2014), pp. 75–84. ISSN: 1559-1662. DOI: 10.1145/2581555.2581568.

[PV02] Henry Prakken and Gerard Vreeswijk. “Logics for Defeasible Argumentation”. In: Handbook of Philosophical Logic 4 (2002), pp. 218–319. DOI: 10.1007/978-94-017-0456-4_3.

[Rah+11] Iyad Rahwan et al. “Representing and classifying arguments on the Semantic Web”. In: The Knowledge Engineering Review 26.04 (Nov. 2011), pp. 487–511. ISSN: 0269-8889. DOI: 10.1017/S0269888911000191.
[RBW08] Francesca Rossi, Peter van Beek, and Toby Walsh. “Chapter 4 Constraint Programming”. In: Handbook of Knowledge Representation. Ed. by Frank van Harmelen, Vladimir Lifschitz, and Bruce Porter. Vol. 3. Foundations of Artificial Intelligence. Elsevier, 2008, pp. 181–211. DOI: 10.1016/S1574-6526(07)03004-0.

[SC88] Sidney Siegel and N. John Castellan Jr. Nonparametric Statistics for The Behavioral Sciences. McGraw-Hill, 1988. ISBN: 0070573573.

[Ton+14] Alice Toniolo et al. “Making Informed Decisions with Provenance and Argumentation Schemes”. In: Eleventh International Workshop on Argumentation in Multi-Agent Systems (ArgMAS 2014). 2014. URL: http://www.inf.pucrs.br/felipe.meneguzzi/download/AAMAS_14/workshops/AAMAS2014-W12/w12-11.pdf.

[Ton+15] Alice Toniolo et al. “Agent Support to Reasoning with Different Types of Evidence in Intelligence Analysis”. In: Proceedings of the 14th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2015). 2015, pp. 781–789. URL: http://aamas2015.com/en/AAMAS_2015_USB/aamas/p781.pdf.

[Tou58] S. Toulmin. The Uses of Argument. Cambridge University Press, Cambridge, UK, 1958.

[VCG14] Mauro Vallati, Federico Cerutti, and Massimiliano Giacomin. “Argumentation Frameworks Features: an Initial Study”. In: 21st European Conference on Artificial Intelligence. Ed. by T. Schaub, G. Friedrich, and B. O’Sullivan. 2014, pp. 1117–1118. URL: http://ebooks.iospress.nl/volumearticle/37148.

[Wal06] Douglas N. Walton. “How to make and defend a proposal in a deliberation dialogue”. In: Artificial Intelligence and Law 14.3 (Sept. 2006), pp. 177–239. ISSN: 0924-8463. DOI: 10.1007/s10506-006-9025-x.

[Wal14] Douglas N. Walton. Burden of Proof, Presumption and Argumentation. Cambridge University Press, 2014.

[Wal97] Douglas N. Walton. Appeal to Expert Opinion. University Park: Pennsylvania State University, 1997.

[WRM08] Douglas N. Walton, Chris Reed, and Fabrizio Macagno. Argumentation Schemes. Cambridge University Press, NY, 2008.