ABC methodology and applications
Jean-Michel Marin & Christian P. Robert
I3M, Université Montpellier 2, Université Paris-Dauphine, & University of Warwick
ISBA 2014, Cancún, July 13
Outline
1 Approximate Bayesian computation
2 ABC as an inference machine
3 ABC for model choice
4 Genetics of ABC
Approximate Bayesian computation
1 Approximate Bayesian computation
ABC basics
Alphabet soup
ABC-MCMC
2 ABC as an inference machine
3 ABC for model choice
4 Genetics of ABC
Intractable likelihoods
Cases when the likelihood function f(y|θ) is unavailable and when the completion step
f(y|θ) = ∫ f(y, z|θ) dz
is impossible or too costly because of the dimension of z
⇒ MCMC cannot be implemented!
The ABC method
Bayesian setting: target is π(θ)f(y|θ)
When likelihood f(y|θ) not in closed form, likelihood-free rejection technique:
ABC algorithm
For an observation y ∼ f(y|θ), under the prior π(θ), keep jointly simulating
θ' ∼ π(θ) , z ∼ f(z|θ') ,
until the auxiliary variable z is equal to the observed value, z = y.
[Tavaré et al., 1997]
Why does it work?!
The proof is trivial:
f(θ_i) ∝ Σ_{z∈D} π(θ_i) f(z|θ_i) I_y(z) ∝ π(θ_i) f(y|θ_i) = π(θ_i|y) .
[Accept–Reject 101]
Earlier occurrence
‘Bayesian statistics and Monte Carlo methods are ideally
suited to the task of passing many models over one
dataset’
[Don Rubin, Annals of Statistics, 1984]
Note: Rubin (1984) does not promote this algorithm for likelihood-free simulation but as frequentist intuition on posterior distributions: parameters from posteriors are more likely to be those that could have generated the data.
A as A...pproximative
When y is a continuous random variable, equality z = y is replaced with a tolerance condition,
ρ(y, z) ≤ ε
where ρ is a distance
Output distributed from
π(θ) P_θ{ρ(y, z) < ε} ∝ π(θ | ρ(y, z) < ε)
[Pritchard et al., 1999]
ABC algorithm
Algorithm 1 Likelihood-free rejection sampler
for i = 1 to N do
  repeat
    generate θ' from the prior distribution π(·)
    generate z from the likelihood f(·|θ')
  until ρ{η(z), η(y)} ≤ ε
  set θ_i = θ'
end for
where η(y) defines a (not necessarily sufficient) [summary] statistic
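A minimal R sketch of Algorithm 1, written generically so that the prior simulator, the likelihood simulator, and the summary statistic η are passed in as functions; the toy normal model in the usage line is an illustrative assumption, not from the slides:

abc_reject <- function(y, rprior, rlik, eta, N = 1e3, eps = 0.1) {
  theta <- numeric(N)
  for (i in 1:N) {
    repeat {
      prop <- rprior()                  # generate theta' from the prior
      z <- rlik(prop)                   # generate z from the likelihood
      if (sqrt(sum((eta(z) - eta(y))^2)) <= eps) break  # until rho{eta(z),eta(y)} <= eps
    }
    theta[i] <- prop                    # set theta_i = theta'
  }
  theta
}
# usage on a toy model y ~ N(theta,1), prior theta ~ N(0,10^2), sample mean as summary
out <- abc_reject(y = rnorm(10, 1), rprior = function() rnorm(1, sd = 10),
                  rlik = function(th) rnorm(10, th), eta = mean, N = 500, eps = 0.2)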
Output
The likelihood-free algorithm samples from the marginal in z of:
π_ε(θ, z|y) = π(θ) f(z|θ) I_{A_{ε,y}}(z) / ∫_{A_{ε,y}×Θ} π(θ) f(z|θ) dz dθ ,
where A_{ε,y} = {z ∈ D | ρ(η(z), η(y)) < ε}.
The idea behind ABC is that the summary statistics coupled with a small tolerance should provide a good approximation of the posterior distribution:
π_ε(θ|y) = ∫ π_ε(θ, z|y) dz ≈ π(θ|y) .
A toy example
Case of
y|θ ∼ N(2(θ + 2)θ(θ − 2), 0.1 + θ²)
and
θ ∼ U[−10,10]
when y = 2 and ρ(y, z) = |y − z|
[Richard Wilkinson, Tutorial at NIPS 2013]
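A short R sketch reproducing this toy example (the simulation budget and tolerance are arbitrary choices):

# ABC rejection for Wilkinson's toy example:
# y|theta ~ N(2(theta+2)theta(theta-2), 0.1 + theta^2), theta ~ U[-10,10]
N <- 1e5; y <- 2; eps <- 1
theta <- runif(N, -10, 10)                        # draws from the prior
z <- rnorm(N, mean = 2 * (theta + 2) * theta * (theta - 2),
           sd = sqrt(0.1 + theta^2))              # pseudo-data
abc <- theta[abs(z - y) <= eps]                   # keep theta's with |y - z| <= eps
hist(abc, freq = FALSE, main = paste("epsilon =", eps))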
A toy example
ε = 7.5
[Figure: scatter plot of θ versus D with acceptance band (−ε, +ε) around the observation; density of the ABC sample against the true posterior]
A toy example
ε = 5
[Figure: same display with ε = 5]
A toy example
ε = 2.5
[Figure: same display with ε = 2.5]
A toy example
ε = 1
[Figure: same display with ε = 1; as ε decreases, the ABC density approaches the true posterior]
ABC as knn
[Biau et al., 2013, Annales de l'IHP]
Practice of ABC: determine tolerance ε as a quantile on observed distances, say 10% or 1% quantile,
ε = ε_N = q_α(d_1, . . . , d_N)
• Interpretation of ε as nonparametric bandwidth only approximation of the actual practice
[Blum & François, 2010]
• ABC is a k-nearest neighbour (knn) method with k_N = Nε_N
[Loftsgaarden & Quesenberry, 1965]
ABC consistency
Provided
k_N / log log N −→ ∞ and k_N / N −→ 0
as N → ∞, for almost all s_0 (with respect to the distribution of S), with probability 1,
(1/k_N) Σ_{j=1}^{k_N} ϕ(θ_j) −→ E[ϕ(θ_j)|S = s_0]
[Devroye, 1982]
Biau et al. (2013) also recall pointwise and integrated mean square error consistency results on the corresponding kernel estimate of the conditional posterior distribution, under constraints
k_N → ∞, k_N/N → 0, h_N → 0 and h_N^p k_N → ∞,
Rates of convergence
Further assumptions (on target and kernel) allow for precise (integrated mean square) convergence rates (as a power of the sample size N), derived from classical k-nearest neighbour regression, like
• when m = 1, 2, 3, k_N ≈ N^{(p+4)/(p+8)} and rate N^{−4/(p+8)}
• when m = 4, k_N ≈ N^{(p+4)/(p+8)} and rate N^{−4/(p+8)} log N
• when m > 4, k_N ≈ N^{(p+4)/(m+p+4)} and rate N^{−4/(m+p+4)}
[Biau et al., 2013]
where p dimension of θ and m dimension of summary statistics
Drag: Only applies to sufficient summary statistics
Probit modelling on Pima Indian women
Example (R benchmark)
200 Pima Indian women with observed variables
• plasma glucose concentration in oral glucose tolerance test
• diastolic blood pressure
• diabetes pedigree function
• presence/absence of diabetes
Probability of diabetes function of above variables
P(y = 1|x) = Φ(x_1β_1 + x_2β_2 + x_3β_3) ,
for 200 observations based on a g-prior modelling:
β ∼ N_3(0, n (X^T X)^{−1})
Use of MLE estimates as summary statistics and of distance
ρ(y, z) = ||β̂(z) − β̂(y)||²
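A hedged R sketch of this scheme, leaning on the 200-observation Pima.tr dataset from MASS as a stand-in for the data; the covariate selection, standardisation, and simulation budget are assumptions, not prescriptions from the slides:

# ABC for the probit benchmark: probit MLE as summary statistic,
# squared Euclidean distance between MLEs, g-prior on beta
library(MASS)                          # provides the 200-row Pima.tr data
X <- scale(as.matrix(Pima.tr[, c("glu", "bp", "ped")]))
y <- as.numeric(Pima.tr$type == "Yes")
n <- nrow(X)
mle <- function(resp) coef(glm(resp ~ X - 1, family = binomial("probit")))
sobs <- mle(y)                         # observed summary statistic
Sigma <- n * solve(t(X) %*% X)         # g-prior covariance n (X'X)^{-1}
N <- 1e4
betas <- mvrnorm(N, mu = rep(0, 3), Sigma = Sigma)    # prior draws
dist <- apply(betas, 1, function(b) {
  z <- rbinom(n, 1, pnorm(X %*% b))    # pseudo-data from the probit model
  sum((suppressWarnings(mle(z)) - sobs)^2)            # distance between MLEs
})
abc <- betas[dist <= quantile(dist, 0.01), ]          # keep 1% closest draws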
Pima Indian benchmark
Figure: Comparison between density estimates of the marginals on β_1 (left), β_2 (center) and β_3 (right) from ABC rejection samples (red) and MCMC samples (black)
MA example
Case of the MA(2) model
x_t = ε_t + Σ_{i=1}^{2} ϑ_i ε_{t−i}
Simple prior: uniform prior over the identifiability zone, e.g.
triangle for MA(2)
MA example (2)
ABC algorithm thus made of
1 picking a new value (ϑ_1, ϑ_2) in the triangle
2 generating an iid sequence (ε_t)_{−2<t≤T}
3 producing a simulated series (x'_t)_{1≤t≤T}
Distance: choice between basic distance between the series
ρ((x_t)_{1≤t≤T}, (x'_t)_{1≤t≤T}) = Σ_{t=1}^{T} (x_t − x'_t)²
or distance between summary statistics like the 2 autocorrelations
τ_j = Σ_{t=j+1}^{T} x_t x_{t−j}
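A hedged R sketch of this ABC scheme for the MA(2) model, using the two autocovariance summaries; series length, budget, and tolerance quantile are arbitrary choices:

# ABC for the MA(2) example: uniform prior over the identifiability triangle
T <- 100; N <- 1e4
x <- arima.sim(list(ma = c(0.6, 0.2)), n = T)         # observed series
tau <- function(x) c(sum(x[-1] * x[-T]),              # lag-1 summary
                     sum(x[-(1:2)] * x[-((T - 1):T)]))# lag-2 summary
sobs <- tau(x)
dist <- rep(Inf, N); thetas <- matrix(0, N, 2)
for (i in 1:N) {
  repeat {                                            # uniform draw on triangle
    th <- c(runif(1, -2, 2), runif(1, -1, 1))
    if (th[1] + th[2] > -1 && th[1] - th[2] < 1) break
  }
  thetas[i, ] <- th
  z <- arima.sim(list(ma = th), n = T)                # pseudo-series
  dist[i] <- sum((tau(z) - sobs)^2)
}
abc <- thetas[dist <= quantile(dist, 0.01), ]         # 1% quantile tolerance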
Comparison of distance impact
[Figure: posterior density estimates of θ_1 and θ_2]
Evaluation of the tolerance on the ABC sample against both distances (ε = 10%, 1%, 0.1% quantiles of simulated distances) for an MA(2) model
Homonymy
The ABC algorithm is not to be confused with the ABC algorithm
The Artificial Bee Colony algorithm is a swarm based meta-heuristic
algorithm that was introduced by Karaboga in 2005 for optimizing
numerical problems. It was inspired by the intelligent foraging
behavior of honey bees. The algorithm is specifically based on the
model proposed by Tereshko and Loengarov (2005) for the foraging
behaviour of honey bee colonies. The model consists of three
essential components: employed and unemployed foraging bees, and
food sources. The first two components, employed and unemployed
foraging bees, search for rich food sources (...) close to their hive.
The model also defines two leading modes of behaviour (...):
recruitment of foragers to rich food sources resulting in positive
feedback and abandonment of poor sources by foragers causing
negative feedback.
[Karaboga, Scholarpedia]
ABC advances
Simulating from the prior is often poor in efficiency:
Either modify the proposal distribution on θ to increase the density of z's within the vicinity of y...
[Marjoram et al, 2003; Bortot et al., 2007; Sisson et al., 2007]
...or by viewing the problem as a conditional density estimation and by developing techniques to allow for larger ε
[Beaumont et al., 2002]
...or even by including ε in the inferential framework [ABCµ]
[Ratmann et al., 2009]
ABC-NP
Better usage of [prior] simulations by adjustment: instead of throwing away θ' such that ρ(η(z), η(y)) > ε, replace θ's with locally regressed transforms
θ* = θ − {η(z) − η(y)}^T β̂
[Csilléry et al., TEE, 2010]
where β̂ is obtained by [NP] weighted least square regression on (η(z) − η(y)) with weights
K_δ{ρ(η(z), η(y))}
[Beaumont et al., 2002, Genetics]
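A minimal R sketch of this local linear adjustment for a scalar parameter and a scalar summary (the Epanechnikov kernel choice and function layout are assumptions):

# Local linear regression adjustment in the spirit of Beaumont et al. (2002)
abc_adjust <- function(theta, eta, eta_obs, delta) {
  d <- abs(eta - eta_obs)
  w <- ifelse(d < delta, 1 - (d / delta)^2, 0)   # Epanechnikov weights K_delta
  keep <- w > 0
  fit <- lm(theta[keep] ~ I(eta[keep] - eta_obs), weights = w[keep])
  beta <- coef(fit)[2]                           # local slope beta-hat
  # replace theta's by locally regressed transforms theta*
  list(theta_star = theta[keep] - (eta[keep] - eta_obs) * beta,
       weights = w[keep])
}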
ABC-NP (regression)
Also found in the subsequent literature, e.g. in Fearnhead-Prangle (2012): weight directly simulation by
K_δ{ρ(η(z(θ)), η(y))}
or
(1/S) Σ_{s=1}^{S} K_δ{ρ(η(z_s(θ)), η(y))}
[consistent estimate of f(η|θ)]
Curse of dimensionality: poor estimate when d = dim(η) is large...
ABC-NP (regression)
Use of the kernel weights
K_δ{ρ(η(z(θ)), η(y))}
leads to the NP estimate of the posterior expectation
Σ_i θ_i K_δ{ρ(η(z(θ_i)), η(y))} / Σ_i K_δ{ρ(η(z(θ_i)), η(y))}
[Blum, JASA, 2010]
ABC-NP (regression)
Use of the kernel weights
K_δ{ρ(η(z(θ)), η(y))}
leads to the NP estimate of the posterior conditional density
Σ_i K̃_b(θ_i − θ) K_δ{ρ(η(z(θ_i)), η(y))} / Σ_i K_δ{ρ(η(z(θ_i)), η(y))}
[Blum, JASA, 2010]
ABC-NP (regression)
[Figure: ABC and regression adjustment; simulated (S, θ) pairs with acceptance band (s_obs − ε, s_obs + ε) around the observed summary]
ABC-NP (density estimations)
Other versions incorporating regression adjustments
Σ_i K̃_b(θ*_i − θ) K_δ{ρ(η(z(θ_i)), η(y))} / Σ_i K_δ{ρ(η(z(θ_i)), η(y))}
In all cases, error
E[ĝ(θ|y)] − g(θ|y) = cb² + cδ² + O_P(b² + δ²) + O_P(1/nδ^d)
var(ĝ(θ|y)) = c/(nbδ^d) (1 + o_P(1))
[Blum, JASA, 2010; standard NP calculations]
ABC-NCH
Incorporating non-linearities and heteroscedasticities:
θ* = m̂(η(y)) + [θ − m̂(η(z))] σ̂(η(y)) / σ̂(η(z))
where
• m̂(η) estimated by non-linear regression (e.g., neural network)
• σ̂(η) estimated by non-linear regression on residuals
log{θ_i − m̂(η_i)}² = log σ²(η_i) + ξ_i
[Blum & François, 2009]
ABC-NCH (2)
Why neural network?
• fights curse of dimensionality
• selects relevant summary statistics
• provides automated dimension reduction
• offers a model choice capability
• improves upon multinomial logistic
[Blum & François, 2009]
ABC-MCMC
Markov chain (θ^{(t)}) created via the transition function
θ^{(t+1)} = θ' ∼ K_ω(θ'|θ^{(t)}) if x ∼ f(x|θ') is such that x = y and u ∼ U(0,1) ≤ π(θ')K_ω(θ^{(t)}|θ') / π(θ^{(t)})K_ω(θ'|θ^{(t)}) ,
θ^{(t+1)} = θ^{(t)} otherwise,
has the posterior π(θ|y) as stationary distribution
[Marjoram et al, 2003]
ABC-MCMC (2)
Algorithm 2 Likelihood-free MCMC sampler
Use Algorithm 1 to get (θ^{(0)}, z^{(0)})
for t = 1 to N do
  Generate θ' from K_ω(·|θ^{(t−1)}),
  Generate z' from the likelihood f(·|θ'),
  Generate u from U[0,1],
  if u ≤ [π(θ')K_ω(θ^{(t−1)}|θ') / π(θ^{(t−1)})K_ω(θ'|θ^{(t−1)})] I_{A_{ε,y}}(z') then
    set (θ^{(t)}, z^{(t)}) = (θ', z')
  else
    (θ^{(t)}, z^{(t)}) = (θ^{(t−1)}, z^{(t−1)}),
  end if
end for
Why does it work?
Acceptance probability does not involve calculating the likelihood, since
[π_ε(θ', z'|y) / π_ε(θ^{(t−1)}, z^{(t−1)}|y)] × [q(θ^{(t−1)}|θ') f(z^{(t−1)}|θ^{(t−1)}) / q(θ'|θ^{(t−1)}) f(z'|θ')]
= [π(θ') f(z'|θ') I_{A_{ε,y}}(z') / π(θ^{(t−1)}) f(z^{(t−1)}|θ^{(t−1)}) I_{A_{ε,y}}(z^{(t−1)})] × [q(θ^{(t−1)}|θ') f(z^{(t−1)}|θ^{(t−1)}) / q(θ'|θ^{(t−1)}) f(z'|θ')]
= [π(θ') q(θ^{(t−1)}|θ') / π(θ^{(t−1)}) q(θ'|θ^{(t−1)})] I_{A_{ε,y}}(z')
where the likelihoods f(z'|θ') and f(z^{(t−1)}|θ^{(t−1)}) cancel, and I_{A_{ε,y}}(z^{(t−1)}) = 1 at the current state.
A toy example
Case of
x ∼ (1/2) N(θ, 1) + (1/2) N(−θ, 1)
under prior θ ∼ N(0, 10²)
ABC sampler
# x is the observed datum, N the number of prior simulations
thetas=rnorm(N,sd=10)                                 # theta ~ N(0,10^2) prior
zed=sample(c(1,-1),N,rep=TRUE)*thetas+rnorm(N,sd=1)   # z ~ mixture given theta
eps=quantile(abs(zed-x),.01)                          # tolerance: 1% distance quantile
abc=thetas[abs(zed-x)<eps]                            # accepted ABC sample
A toy example
ABC-MCMC sampler
metas=rep(0,N)
metas[1]=rnorm(1,sd=10)                # start from a prior draw
zed=rep(0,N)
zed[1]=x
for (t in 2:N){
  metas[t]=rnorm(1,mean=metas[t-1],sd=5)                   # random walk proposal
  zed[t]=rnorm(1,mean=(1-2*(runif(1)<.5))*metas[t],sd=1)   # pseudo-data z'
  if ((abs(zed[t]-x)>eps)||(runif(1)>dnorm(metas[t],sd=10)/dnorm(metas[t-1],sd=10))){
    metas[t]=metas[t-1]                # reject: repeat the current state
    zed[t]=zed[t-1]}
}
A toy example
x = 2
[Figure: ABC posterior density of θ over (−4, 4)]
x = 10
[Figure: ABC posterior density of θ over (−10, 10)]
x = 50
[Figure: ABC posterior density of θ over (−40, 40)]
A PMC version
Use of the same kernel idea as ABC-PRC (Sisson et al., 2007) but with IS correction
Generate a sample at iteration t by
π̂_t(θ^{(t)}) ∝ Σ_{j=1}^{N} ω_j^{(t−1)} K_t(θ^{(t)}|θ_j^{(t−1)})
modulo acceptance of the associated x_t, and use an importance weight associated with an accepted simulation θ_i^{(t)}
ω_i^{(t)} ∝ π(θ_i^{(t)}) / π̂_t(θ_i^{(t)}) .
⇒ Still likelihood free
[Beaumont et al., 2009]
ABC-PMC algorithm
Given a decreasing sequence of approximation levels ε_1 ≥ . . . ≥ ε_T,
1. At iteration t = 1,
  For i = 1, ..., N
    Simulate θ_i^{(1)} ∼ π(θ) and x ∼ f(x|θ_i^{(1)}) until ρ(x, y) < ε_1
    Set ω_i^{(1)} = 1/N
  Take τ² as twice the empirical variance of the θ_i^{(1)}'s
2. At iteration 2 ≤ t ≤ T,
  For i = 1, ..., N, repeat
    Pick θ*_i from the θ_j^{(t−1)}'s with probabilities ω_j^{(t−1)}
    generate θ_i^{(t)}|θ*_i ∼ N(θ*_i, σ_t²) and x ∼ f(x|θ_i^{(t)})
  until ρ(x, y) < ε_t
  Set ω_i^{(t)} ∝ π(θ_i^{(t)}) / Σ_{j=1}^{N} ω_j^{(t−1)} ϕ(σ_t^{−1}{θ_i^{(t)} − θ_j^{(t−1)}})
  Take τ²_{t+1} as twice the weighted empirical variance of the θ_i^{(t)}'s
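A compact R sketch of this scheme on the earlier mixture toy example; the tolerance schedule, observation, and particle number are arbitrary choices:

# ABC-PMC (Beaumont et al., 2009) on x ~ 0.5 N(theta,1) + 0.5 N(-theta,1)
x <- 2; N <- 1e3; eps <- c(2, 1, 0.5, 0.25)        # decreasing tolerances
sim <- function(th) sample(c(-1, 1), 1) * th + rnorm(1)
# iteration 1: plain rejection from the prior
theta <- replicate(N, { repeat { th <- rnorm(1, sd = 10)
                                 if (abs(sim(th) - x) < eps[1]) break }; th })
w <- rep(1/N, N); tau2 <- 2 * var(theta)
for (t in 2:length(eps)) {
  newtheta <- numeric(N)
  for (i in 1:N) {
    repeat {                                       # move a resampled particle
      th <- rnorm(1, mean = sample(theta, 1, prob = w), sd = sqrt(tau2))
      if (abs(sim(th) - x) < eps[t]) break
    }
    newtheta[i] <- th
  }
  # importance weight: prior over kernel mixture density
  w <- sapply(newtheta, function(th)
    dnorm(th, sd = 10) / sum(w * dnorm(th, mean = theta, sd = sqrt(tau2))))
  w <- w / sum(w)
  tau2 <- 2 * sum(w * (newtheta - sum(w * newtheta))^2)  # weighted variance x2
  theta <- newtheta
}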
Sequential Monte Carlo
SMC is a simulation technique to approximate a sequence of related probability distributions π_n with π_0 "easy" and π_T as target.
Iterated IS as PMC: particles moved from time n−1 to time n via kernel K_n and use of a sequence of extended targets π̃_n
π̃_n(z_{0:n}) = π_n(z_n) Π_{j=0}^{n−1} L_j(z_{j+1}, z_j)
where the L_j's are backward Markov kernels [check that π_n(z_n) is a marginal]
[Del Moral, Doucet & Jasra, Series B, 2006]
Sequential Monte Carlo (2)
Algorithm 3 SMC sampler
sample z_i^{(0)} ∼ γ_0(x) (i = 1, . . . , N)
compute weights w_i^{(0)} = π_0(z_i^{(0)}) / γ_0(z_i^{(0)})
for t = 1 to N do
  if ESS(w^{(t−1)}) < N_T then
    resample N particles z^{(t−1)} and set weights to 1
  end if
  generate z_i^{(t)} ∼ K_t(z_i^{(t−1)}, ·) and set weights to
  w_i^{(t)} = w_i^{(t−1)} π_t(z_i^{(t)}) L_{t−1}(z_i^{(t)}, z_i^{(t−1)}) / [π_{t−1}(z_i^{(t−1)}) K_t(z_i^{(t−1)}, z_i^{(t)})]
end for
[Del Moral, Doucet & Jasra, Series B, 2006]
ABC-SMC
[Del Moral, Doucet & Jasra, 2009]
True derivation of an SMC-ABC algorithm
Use of a kernel K_n associated with target π_{ε_n} and derivation of the backward kernel
L_{n−1}(z, z') = π_{ε_n}(z') K_n(z', z) / π_{ε_n}(z)
Update of the weights
w_{in} ∝ w_{i(n−1)} [Σ_{m=1}^{M} I_{A_{ε_n}}(x^m_{in})] / [Σ_{m=1}^{M} I_{A_{ε_{n−1}}}(x^m_{i(n−1)})]
when x^m_{in} ∼ K(x_{i(n−1)}, ·)
ABC-SMCM
Modification: Makes M repeated simulations of the pseudo-data z given the parameter, rather than using a single [M = 1] simulation, leading to weight that is proportional to the number of accepted z_i's
ω(θ) = (1/M) Σ_{i=1}^{M} I_{ρ(η(y),η(z_i))<ε}
[limit in M means exact simulation from (tempered) target]
Properties of ABC-SMC
The ABC-SMC method properly uses a backward kernel L(z, z') to simplify the importance weight and to remove the dependence on the unknown likelihood from this weight. Update of importance weights is reduced to the ratio of the proportions of surviving particles
Major assumption: the forward kernel K is supposed to be invariant against the true target [tempered version of the true posterior]
Adaptivity in ABC-SMC algorithm only found in on-line construction of the thresholds ε_t, slowly enough to keep a large number of accepted transitions
A mixture example (2)
Recovery of the target, whether using a fixed standard deviation of τ = 0.15 or τ = 1/0.15, or a sequence of adaptive τ_t's.
[Figure: five ABC posterior density estimates of θ over (−3, 3)]
ABC inference machine
1 Approximate Bayesian computation
2 ABC as an inference machine
Exact ABC
ABCµ
Automated summary
statistic selection
3 ABC for model choice
4 Genetics of ABC
How Bayesian is aBc..?
• may be a convergent method of inference (meaningful?
sufficient? foreign?)
• approximation error unknown (w/o massive simulation)
• pragmatic/empirical B (there is no other solution!)
• many calibration issues (tolerance, distance, statistics)
• the NP side should be incorporated into the whole B picture
• the approximation error should also be part of the B inference
Wilkinson's exact (A)BC
ABC approximation error (i.e. non-zero tolerance) replaced with exact simulation from a controlled approximation to the target, convolution of true posterior with kernel function
π_ε(θ, z|y) = π(θ) f(z|θ) K_ε(y − z) / ∫ π(θ) f(z|θ) K_ε(y − z) dz dθ ,
with K_ε kernel parameterised by bandwidth ε.
[Wilkinson, 2013]
Theorem
The ABC algorithm based on the assumption of a randomised observation y = ỹ + ξ, ξ ∼ K_ε, and an acceptance probability of
K_ε(y − z)/M
gives draws from the posterior distribution π(θ|y).
How exact a BC?
"Using ε to represent measurement error is straightforward, whereas using ε to model the model discrepancy is harder to conceptualize and not as commonly used"
[Richard Wilkinson, 2013]
How exact a BC?
Pros
• Pseudo-data from true model and observed data from noisy
model
• Interesting perspective in that outcome is completely
controlled
• Link with ABCµ and assuming y is observed with a measurement error with density K_ε
• Relates to the theory of model approximation
[Kennedy & O'Hagan, 2001]
Cons
• Requires K_ε to be bounded by M
• True approximation error never assessed
• Requires a modification of the standard ABC algorithm
Noisy ABC
Idea: Modify the data from the start
ỹ = y_0 + εζ_1
with the same scale ε as ABC
[see Fearnhead-Prangle]
and run ABC on ỹ
Then ABC produces an exact simulation from π_ε(θ|ỹ) = π(θ|ỹ)
[Dean et al., 2011; Fearnhead and Prangle, 2012]
Consistent noisy ABC
• Degrading the data improves the estimation performances:
• Noisy ABC-MLE is asymptotically (in n) consistent
• under further assumptions, the noisy ABC-MLE is
asymptotically normal
• increase in variance of order ε^{−2}
• likely degradation in precision or computing time due to the
lack of summary statistic [curse of dimensionality]
ABCµ
[Ratmann, Andrieu, Wiuf and Richardson, 2009, PNAS]
Use of a joint density
f(θ, ε|y) ∝ ξ(ε|y, θ) × π_θ(θ) × π_ε(ε)
where y is the data, and ξ(ε|y, θ) is the prior predictive density of ρ(η(z), η(y)) given θ and y when z ∼ f(z|θ)
Warning! Replacement of ξ(ε|y, θ) with a non-parametric kernel approximation.
ABCµ details
Multidimensional distances ρ_k (k = 1, . . . , K) and errors
ε_k = ρ_k(η_k(z), η_k(y)), with
ε_k ∼ ξ_k(ε|y, θ) ≈ ξ̂_k(ε|y, θ) = (1/Bh_k) Σ_b K[{ε_k − ρ_k(η_k(z_b), η_k(y))}/h_k]
then used in replacing ξ(ε|y, θ) with min_k ξ̂_k(ε|y, θ)
ABCµ involves acceptance probability
[π(θ', ε') / π(θ, ε)] × [q(θ', θ)q(ε', ε) / q(θ, θ')q(ε, ε')] × [min_k ξ̂_k(ε'|y, θ') / min_k ξ̂_k(ε|y, θ)]
ABCµ multiple errors
[© Ratmann et al., PNAS, 2009]
ABCµ for model choice
[© Ratmann et al., PNAS, 2009]
Semi-automatic ABC
Fearnhead and Prangle (2012) study ABC and the selection of the
summary statistic in close proximity to Wilkinson’s proposal
ABC is then considered from a purely inferential viewpoint and
calibrated for estimation purposes
Use of a randomised (or 'noisy') version of the summary statistics
η̃(y) = η(y) + τε
Derivation of a well-calibrated version of ABC, i.e. an algorithm
that gives proper predictions for the distribution associated with
this randomised summary statistic
Summary statistics
Main results:
• Optimality of the posterior expectation E[θ|y] of the parameter of interest as summary statistics η(y)!
• Use of the standard quadratic loss function
(θ − θ_0)^T A (θ − θ_0) .
Details on Fearnhead and Prangle (FP) ABC
Use of a summary statistic S(·), an importance proposal g(·), a kernel K(·) ≤ 1 and a bandwidth h > 0 such that
(θ, y_sim) ∼ g(θ) f(y_sim|θ)
is accepted with probability (hence the bound)
K[{S(y_sim) − s_obs}/h]
and the corresponding importance weight defined by
π(θ) / g(θ)
[Fearnhead  Prangle, 2012]
Average acceptance asymptotics
For the average acceptance probability/approximate likelihood
p(θ|s_obs) = ∫ f(y_sim|θ) K[{S(y_sim) − s_obs}/h] dy_sim ,
overall acceptance probability
p(s_obs) = ∫ p(θ|s_obs) π(θ) dθ = π(s_obs) h^d + o(h^d)
[FP, Lemma 1]
Calibration of h
"This result gives insight into how S(·) and h affect the Monte Carlo error. To minimize Monte Carlo error, we need h^d to be not too small. Thus ideally we want S(·) to be a low dimensional summary of the data that is sufficiently informative about θ that π(θ|s_obs) is close, in some sense, to π(θ|y_obs)" (FP, p.5)
• turns h into an absolute value while it should be context-dependent and user-calibrated
• only addresses one term in the approximation error and acceptance probability ("curse of dimensionality")
• h large prevents π_ABC(θ|s_obs) from being close to π(θ|s_obs)
• d small prevents π(θ|s_obs) from being close to π(θ|y_obs) ("curse of [dis]information")
Converging ABC
Theorem (FP)
For noisy ABC, under identifiability assumptions, the expected
noisy-ABC log-likelihood,
E {log[p(θ|sobs)]} = log[p(θ|S(yobs) + )]π(yobs|θ0)K( )dyobsd ,
has its maximum at θ = θ0.
Converging ABC
Corollary
For noisy ABC, under regularity constraints on summary statistics,
the ABC posterior converges onto a point mass on the true
parameter value as m → ∞.
Loss motivated statistic
Under quadratic loss function,
Theorem (FP)
(i) The minimal posterior error E[L(θ, θ̂)|y_obs] occurs when θ̂ = E(θ|y_obs) (!)
(ii) When h → 0, E_ABC(θ|s_obs) converges to E(θ|y_obs)
(iii) If S(y_obs) = E[θ|y_obs] then for θ̂ = E_ABC[θ|s_obs]
E[L(θ, θ̂)|y_obs] = trace(AΣ) + h² ∫ x^T A x K(x) dx + o(h²) .
Optimal summary statistic
"We take a different approach, and weaken the requirement for π_ABC to be a good approximation to π(θ|y_obs). We argue for π_ABC to be a good approximation solely in terms of the accuracy of certain estimates of the parameters." (FP, p.5)
From this result, FP
• derive their choice of summary statistic,
S(y) = E(θ|y)
[almost sufficient: E_ABC[θ|S(y_obs)] = E[θ|y_obs]]
• suggest
h = O(N^{−1/(2+d)}) and h = O(N^{−1/(4+d)})
as optimal bandwidths for noisy and standard ABC.
Caveat
Since E(θ|yobs) is most usually unavailable, FP suggest
(i) use a pilot run of ABC to determine a region of non-negligible
posterior mass;
(ii) simulate sets of parameter values and data;
(iii) use the simulated sets of parameter values and data to
estimate the summary statistic; and
(iv) run ABC with this choice of summary statistic.
ABC for model choice
1 Approximate Bayesian computation
2 ABC as an inference machine
3 ABC for model choice
4 Genetics of ABC
Bayesian model choice
Several models M1, M2, . . . are considered simultaneously for a
dataset y and the model index M is part of the inference.
Use of a prior distribution π(M = m), plus a prior distribution on the parameter conditional on the value m of the model index, π_m(θ_m)
Goal is to derive the posterior distribution of M, challenging
computational target when models are complex.
Generic ABC for model choice
Algorithm 4 Likelihood-free model choice sampler (ABC-MC)
for t = 1 to T do
repeat
Generate m from the prior π(M = m)
Generate θm from the prior πm(θm)
Generate z from the model fm(z|θm)
until ρ{η(z), η(y)} < ε
Set m^{(t)} = m and θ^{(t)} = θ_m
end for
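A hedged R sketch of Algorithm 4 for a hypothetical two-model setting (the Normal-versus-Laplace pair used later in these slides; priors, summary, and tolerance are illustrative assumptions):

# ABC model choice: M1: y ~ N(theta,1) versus M2: y ~ Laplace(theta, 1/sqrt(2))
abc_mc <- function(y, T = 1e4, eps = 0.1) {
  out <- matrix(0, T, 2)                       # columns: model index, theta
  for (t in 1:T) {
    repeat {
      m <- sample(1:2, 1)                      # m from the prior pi(M = m)
      theta <- rnorm(1, sd = 10)               # theta_m from its prior
      z <- if (m == 1) rnorm(length(y), theta) else
        theta + sample(c(-1, 1), length(y), TRUE) * rexp(length(y)) / sqrt(2)
      if (abs(mean(z) - mean(y)) < eps) break  # rho{eta(z), eta(y)} < eps
    }
    out[t, ] <- c(m, theta)
  }
  table(out[, 1]) / T                          # posterior model frequencies
}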
ABC estimates
Posterior probability π(M = m|y) approximated by the frequency of acceptances from model m
(1/T) Σ_{t=1}^{T} I_{m^{(t)}=m} .
Issues with implementation:
• should tolerances ε be the same for all models?
• should summary statistics vary across models (incl. their dimension)?
• should the distance measure ρ vary as well?
Extension to a weighted polychotomous logistic regression estimate of π(M = m|y), with non-parametric kernel weights
[Cornuet et al., DIYABC, 2009]
The Great ABC controversy
On-going controversy in phylogeographic genetics about the validity of using ABC for testing
Against: Templeton, 2008, 2009, 2010a, 2010b, 2010c argues that nested hypotheses cannot have higher probabilities than nesting hypotheses (!)
Replies: Fagundes et al., 2008, Beaumont et al., 2010, Berger et al., 2010, Csilléry et al., 2010 point out that the criticisms are addressed at [Bayesian] model-based inference and have nothing to do with ABC...
Back to sufficiency
If η_1(x) sufficient statistic for model m = 1 and parameter θ_1 and η_2(x) sufficient statistic for model m = 2 and parameter θ_2, (η_1(x), η_2(x)) is not always sufficient for (m, θ_m)
⇒ Potential loss of information at the testing level
Limiting behaviour of B12 (T → ∞)
ABC approximation
B12(y) = [Σ_{t=1}^{T} I_{m_t=1} I_{ρ{η(z_t),η(y)}≤ε}] / [Σ_{t=1}^{T} I_{m_t=2} I_{ρ{η(z_t),η(y)}≤ε}] ,
where the (m_t, z_t)'s are simulated from the (joint) prior
As T goes to infinity, limit
B12^ε(y) = [∫ I_{ρ{η(z),η(y)}≤ε} π_1(θ_1) f_1(z|θ_1) dz dθ_1] / [∫ I_{ρ{η(z),η(y)}≤ε} π_2(θ_2) f_2(z|θ_2) dz dθ_2]
= [∫ I_{ρ{η,η(y)}≤ε} π_1(θ_1) f_1^η(η|θ_1) dη dθ_1] / [∫ I_{ρ{η,η(y)}≤ε} π_2(θ_2) f_2^η(η|θ_2) dη dθ_2] ,
where f_1^η(η|θ_1) and f_2^η(η|θ_2) distributions of η(z)
Limiting behaviour of B12 (ε → 0)
When ε goes to zero,
B12^η(y) = [∫ π_1(θ_1) f_1^η(η(y)|θ_1) dθ_1] / [∫ π_2(θ_2) f_2^η(η(y)|θ_2) dθ_2] ,
⇒ Bayes factor based on the sole observation of η(y)
Limiting behaviour of B12 (under sufficiency)
If η(y) sufficient statistic for both models,
f_i(y|θ_i) = g_i(y) f_i^η(η(y)|θ_i)
Thus
B12(y) = [∫_{Θ_1} π(θ_1) g_1(y) f_1^η(η(y)|θ_1) dθ_1] / [∫_{Θ_2} π(θ_2) g_2(y) f_2^η(η(y)|θ_2) dθ_2]
= [g_1(y) ∫ π_1(θ_1) f_1^η(η(y)|θ_1) dθ_1] / [g_2(y) ∫ π_2(θ_2) f_2^η(η(y)|θ_2) dθ_2]
= [g_1(y)/g_2(y)] B12^η(y) .
[Didelot, Everitt, Johansen & Lawson, 2011]
⇒ No discrepancy only when cross-model sufficiency
MA(q) divergence
[Figure: four barplots of model frequencies]
Evolution [against ε] of ABC Bayes factor, in terms of frequencies of visits to models MA(1) (left) and MA(2) (right) when ε equal to 10, 1, .1, .01% quantiles on insufficient autocovariance distances. Sample of 50 points from a MA(2) with θ_1 = 0.6, θ_2 = 0.2. True Bayes factor equal to 17.71.
MA(q) divergence
[Figure: four barplots of model frequencies]
Evolution [against ε] of ABC Bayes factor, in terms of frequencies of visits to models MA(1) (left) and MA(2) (right) when ε equal to 10, 1, .1, .01% quantiles on insufficient autocovariance distances. Sample of 50 points from a MA(1) model with θ_1 = 0.6. True Bayes factor B21 equal to .004.
Further comments
'There should be the possibility that for the same model, but different (non-minimal) [summary] statistics (so different η's: η_1 and η*_1) the ratio of evidences may no longer be equal to one.'
[Michael Stumpf, Jan. 28, 2011, 'Og]
Using different summary statistics [on different models] may
indicate the loss of information brought by each set but agreement
does not lead to trustworthy approximations.
LDA summaries for model choice
In parallel to F&P's semi-automatic ABC, selection of most discriminant subvector out of a collection of summary statistics, can be based on Linear Discriminant Analysis (LDA)
[Estoup & al., 2012, Mol. Ecol. Res.]
Solution now implemented in DIYABC.2
[Cornuet & al., 2008, Bioinf.; Estoup & al., 2013]
Implementation
Step 1: Take a subset of the α% (e.g., 1%) best simulations from an ABC reference table usually including 10^6–10^9 simulations for each of the M compared scenarios/models
Selection based on normalized Euclidian distance computed between observed and simulated raw summaries
Step 2: run LDA on this subset to transform summaries into (M − 1) discriminant variables
When computing LDA functions, weight simulated data with an Epanechnikov kernel
Step 3: Estimation of the posterior probabilities of each competing scenario/model by polychotomous local logistic regression against the M − 1 most discriminant variables
[Cornuet & al., 2008, Bioinformatics]
LDA advantages
• much faster computation of scenario probabilities via polychotomous regression
• a (much) lower number of explanatory variables improves the accuracy of the ABC approximation, reduces the tolerance ε and avoids extra costs in constructing the reference table
• allows for a large collection of initial summaries
• ability to evaluate Type I and Type II errors on more complex models [more on this later]
• LDA reduces correlation among explanatory variables
When available, using both simulated and real data sets, posterior probabilities of scenarios computed from LDA-transformed and raw summaries are strongly correlated
A stylised problem
Central question to the validation of ABC for model choice:
When is a Bayes factor based on an insufficient statistic T(y) consistent?
Note/warning: inference drawn on T(y) through B12^T(y) necessarily differs from inference drawn on y through B12(y)
[Marin, Pillai, X, & Rousseau, JRSS B, 2013]
A benchmark, if toy, example
Comparison suggested by referee of PNAS paper [thanks!]:
[X, Cornuet, Marin, & Pillai, Aug. 2011]
Model M1: y ∼ N(θ_1, 1) opposed to model M2: y ∼ L(θ_2, 1/√2), Laplace distribution with mean θ_2 and scale parameter 1/√2 (variance one).
Four possible statistics
1 sample mean y (sufficient for M1 if not M2);
2 sample median med(y) (insufficient);
3 sample variance var(y) (ancillary);
4 median absolute deviation mad(y) = med(|y − med(y)|);
[Figure: boxplots of posterior probabilities of each model under Gauss and Laplace data, n = 100]
Framework
Starting from sample
y = (y_1, . . . , y_n)
the observed sample, not necessarily iid with true distribution
y ∼ P^n
Summary statistics
T(y) = T^n = (T_1(y), T_2(y), · · · , T_d(y)) ∈ R^d
with true distribution T^n ∼ G_n.
Framework
⇒ Comparison of
– under M1, y ∼ F_{1,n}(·|θ_1) where θ_1 ∈ Θ_1 ⊂ R^{p_1}
– under M2, y ∼ F_{2,n}(·|θ_2) where θ_2 ∈ Θ_2 ⊂ R^{p_2}
turned into
– under M1, T(y) ∼ G_{1,n}(·|θ_1), and θ_1|T(y) ∼ π_1(·|T^n)
– under M2, T(y) ∼ G_{2,n}(·|θ_2), and θ_2|T(y) ∼ π_2(·|T^n)
Assumptions
A collection of asymptotic “standard” assumptions:
[A1] is a standard central limit theorem under the true model with
asymptotic mean µ0
[A2] controls the large deviations of the estimator Tn
from the
model mean µ(θ)
[A3] is the standard prior mass condition found in Bayesian
asymptotics (di effective dimension of the parameter)
[A4] restricts the behaviour of the model density against the true
density
[Think CLT!]
Asymptotic marginals
Asymptotically, under [A1]–[A4]
m_i(t) = ∫_{Θ_i} g_i(t|θ_i) π_i(θ_i) dθ_i
is such that
(i) if inf{|µ_i(θ_i) − µ_0|; θ_i ∈ Θ_i} = 0,
C_l v_n^{d−d_i} ≤ m_i(T^n) ≤ C_u v_n^{d−d_i}
and
(ii) if inf{|µ_i(θ_i) − µ_0|; θ_i ∈ Θ_i} > 0
m_i(T^n) = o_{P^n}[v_n^{d−τ_i} + v_n^{d−α_i}].
Between-model consistency
Consequence of above is that asymptotic behaviour of the Bayes factor is driven by the asymptotic mean value µ(θ) of T^n under both models. And only by this mean value!
Indeed, if
inf{|µ_0 − µ_2(θ_2)|; θ_2 ∈ Θ_2} = inf{|µ_0 − µ_1(θ_1)|; θ_1 ∈ Θ_1} = 0
then
C_l v_n^{−(d_1−d_2)} ≤ m_1(T^n)/m_2(T^n) ≤ C_u v_n^{−(d_1−d_2)} ,
where C_l, C_u = O_{P^n}(1), irrespective of the true model.
⇒ Only depends on the difference d_1 − d_2: no consistency
Else, if
inf{|µ_0 − µ_2(θ_2)|; θ_2 ∈ Θ_2} > inf{|µ_0 − µ_1(θ_1)|; θ_1 ∈ Θ_1} = 0
then
m_1(T^n)/m_2(T^n) ≥ C_u min(v_n^{−(d_1−α_2)}, v_n^{−(d_1−τ_2)})
Checking for adequate statistics
Run a practical check of the relevance (or non-relevance) of T^n:
null hypothesis that both models are compatible with the statistic T^n
H0 : inf{|µ_2(θ_2) − µ_0|; θ_2 ∈ Θ_2} = 0
against
H1 : inf{|µ_2(θ_2) − µ_0|; θ_2 ∈ Θ_2} > 0
testing procedure provides estimates of mean of T^n under each model and checks for equality
Checking in practice
• Under each model M_i, generate ABC sample θ_{i,l}, l = 1, · · · , L
• For each θ_{i,l}, generate y_{i,l} ∼ F_{i,n}(·|ψ_{i,l}), derive T^n(y_{i,l}) and compute
µ̂_i = (1/L) Σ_{l=1}^{L} T^n(y_{i,l}), i = 1, 2 .
• Conditionally on T^n(y),
√L {µ̂_i − E^π[µ_i(θ_i)|T^n(y)]} ⇝ N(0, V_i),
• Test for a common mean
H0 : µ̂_1 ∼ N(µ_0, V_1) , µ̂_2 ∼ N(µ_0, V_2)
against the alternative of different means
H1 : µ̂_i ∼ N(µ_i, V_i), with µ_1 ≠ µ_2 .
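A sketch of this check in R for a scalar summary statistic; the pooled-mean χ² construction is one reasonable reading of the common-mean test, and the inputs are simulated T^n values to be supplied by the user:

# Test for a common mean of T^n under both models from ABC output
check_statistic <- function(T1, T2) {
  # T1, T2: vectors of T^n(y_{i,l}) simulated under models 1 and 2
  mu <- c(mean(T1), mean(T2))
  V  <- c(var(T1) / length(T1), var(T2) / length(T2))
  mu0 <- sum(mu / V) / sum(1 / V)            # pooled mean under H0
  chi2 <- sum((mu - mu0)^2 / V)              # ~ chi^2_1 under H0
  c(statistic = chi2, p.value = pchisq(chi2, df = 1, lower.tail = FALSE))
}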
Toy example: Laplace versus Gauss
[Figure: boxplots of the normalised χ² without and with mad, under Gauss and Laplace data]
Genetics of ABC
1 Approximate Bayesian computation
2 ABC as an inference machine
3 ABC for model choice
4 Genetics of ABC
Genetic background of ABC
ABC is a recent computational technique that only requires being
able to sample from the likelihood f (·|θ)
This technique stemmed from population genetics models, about
15 years ago, and population geneticists still contribute
significantly to methodological developments of ABC.
[Griffiths & al., 1997; Tavaré & al., 1999]
Population genetics
[Part derived from the teaching material of Raphael Leblois, ENS Lyon, November 2010]
• Describe the genotypes, estimate the alleles frequencies,
determine their distribution among individuals, populations
and between populations;
• Predict and understand the evolution of gene frequencies in
populations as a result of various factors.
⇒ Analyses the effect of various evolutive forces (mutation, drift,
migration, selection) on the evolution of gene frequencies in time
and space.
Wright-Fisher model
[Figure: allele frequency drift in the Wright-Fisher model]
• A population of constant
size, in which individuals
reproduce at the same time.
• Each gene in a generation is
a copy of a gene of the
previous generation.
• In the absence of mutation
and selection, allele
frequencies derive inevitably
until the fixation of an
allele.
Coalescent theory
[Kingman, 1982; Tajima, Tavaré, etc.]
Coalescence theory interested in the genealogy of a sample of genes back in time to the common ancestor of the sample.
[Figure: modelling genetic drift backwards in time, up to the common ancestor of a sample of genes]
The different lineages merge when we go back in the past.
Neutral mutations
[Figure: under neutrality, the genealogy is built from the demographic parameters (e.g. N) and the mutations are then added a posteriori on the branches, from the MRCA to the leaves, producing polymorphism data under the assumed demographic and mutational models]
• Under the assumption of
neutrality, the mutations
are independent of the
genealogy.
• We construct the genealogy
according to the
demographic parameters,
then we add a posteriori the
mutations.
Neutral model at a given microsatellite locus, in a closed panmictic population at equilibrium
Kingman's genealogy
When time axis is normalized,
T(k) ∼ Exp(k(k − 1)/2)
Mutations according to the Simple stepwise Mutation Model (SMM)
• date of the mutations ∼ Poisson process with intensity θ/2 over the branches
• MRCA = 100
• independent mutations: ±1 with pr. 1/2
Observations: leaves of the tree
θ̂ = ?
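A small R sketch of this generative model; the sample size and θ are arbitrary, and the genealogy bookkeeping is reduced to tracking which leaves descend from each active lineage:

# Kingman's coalescent + stepwise mutation model (SMM), MRCA allele = 100
sim_smm <- function(n = 10, theta = 2) {
  alleles <- rep(100, n)                 # leaves all start at the MRCA value
  groups <- as.list(1:n)                 # descendant leaves of each lineage
  while (length(groups) > 1) {
    k <- length(groups)
    Tk <- rexp(1, rate = k * (k - 1) / 2)    # inter-coalescence time T(k)
    for (g in groups) {                      # mutations on each branch segment
      nmut <- rpois(1, theta / 2 * Tk)       # Poisson(theta/2) intensity
      if (nmut > 0)                          # each mutation is +/-1 w.p. 1/2
        alleles[g] <- alleles[g] + sum(sample(c(-1, 1), nmut, replace = TRUE))
    }
    pair <- sample(k, 2)                     # merge two random lineages
    groups[[pair[1]]] <- c(groups[[pair[1]]], groups[[pair[2]]])
    groups <- groups[-pair[2]]
  }
  alleles                                    # allele sizes at the leaves
}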
Much more interesting models. . .
• several independent loci
Independent gene genealogies and mutations
• different populations
linked by an evolutionary scenario made of divergences,
admixtures, migrations between populations, selection
pressure, etc.
• larger sample size
usually between 50 and 100 genes
Available population scenarios
Between populations: three types of events, backward in time
• the divergence is the fusion between two populations,
• the admixture is the split of a population into two parts,
• the migration allows the move of some lineages of a
population to another.
[Figure 2.2: example genealogy of five individuals from a single closed population at equilibrium; the sampled individuals are the leaves of the dendrogram, the inter-coalescence durations T2, . . . , T5 are independent, and Tk follows an exponential distribution with parameter k(k − 1)/2]
[Figure 2.3: graphical representations of the three types of inter-population events of a demographic scenario: (a) divergence, two populations merging backward in time; (b) admixture, a population splitting into two; (c) migration, lineages moving between two populations]
A complex scenario
The goal is to discriminate between different population scenarios
from a dataset of polymorphism (DNA sample) y observed at the
present time.
[Figure 2.1: example of a complex evolutionary scenario composed of inter-population events, involving four sampled populations Pop1, . . . , Pop4 and two unobserved populations Pop5 and Pop6; the branches are tubes and the demographic scenario constrains the genealogy to stay inside them]
Demo-genetic inference
Each model is characterized by a set of parameters θ that cover
historical (time divergence, admixture time ...), demographics
(population sizes, admixture rates, migration rates, ...) and genetic
(mutation rate, ...) factors
The goal is to estimate these parameters from a dataset of
polymorphism (DNA sample) y observed at the present time
Problem: most of the time, we cannot calculate the likelihood of the polymorphism data f(y|θ).
Intractable likelihood
Missing (too missing!) data structure:
f(y|θ) = ∫_G f(y|G, θ) f(G|θ) dG
The genealogies are considered as nuisance parameters.
This problem thus differs from the phylogenetic approach where the tree is the parameter of interest.
A genuine example of application
Pygmies populations: do they have a common origin? Is there a
lot of exchanges between pygmies and non-pygmies populations?
Scenarios under competition
Different possible scenarios; scenario choice by ABC
[Verdu et al., 2009]
Simulation results
Scenario 1a is strongly supported relative to the others: this argues for a common origin of the West African Pygmy populations
[Verdu et al., 2009]
c Scenario 1A is chosen.
Most likely scenario
Evolutionary scenario: a story is “told” from these inferences
[Verdu et al., 2009]
Instance of ecological questions [message in a beetle]
• How did the Asian Ladybird beetle arrive in Europe?
• Why do they swarm right now?
• What are the routes of invasion?
• How to get rid of them?
• Why did the chicken cross the road?
[Lombaert et al., 2010, PLoS ONE]
beetles in forests
Worldwide invasion routes of Harmonia axyridis
For each outbreak, the arrow indicates the most likely invasion pathway and the associated posterior probability, with 95% credible intervals in brackets
[Estoup et al., 2012, Molecular Ecology Res.]
A population genetic illustration of ABC model choice
Two populations (1 and 2) having diverged at a fixed known time in the past, and a third population (3) which diverged from one of those two populations (models 1 and 2, respectively).
Observation of 50 diploid individuals/population genotyped at 5, 50 or 100 independent microsatellite loci.
A population genetic illustration of ABC model choice
Stepwise mutation model: the number of repeats of the mutated gene increases or decreases by one. The mutation rate µ, common to all loci (single parameter), is set to 0.005 to generate the data and carries a uniform prior distribution µ ∼ U[0.0001, 0.01]
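A crude generative sketch of this setting follows, under two loudly flagged assumptions: within-population coalescence is ignored (each population is summarized by one gene copy per locus), and population 3 is taken to diverge at time t/2 from its parent population while populations 1 and 2 diverged at time t. Neither choice comes from the slides; they are only meant to make the expectations below reproducible:

```python
import numpy as np

rng = np.random.default_rng(1)

def smm_steps(mu, tau, size):
    # Net +/-1 change along a branch of length tau: Poisson(mu*tau)
    # mutations, each +1 or -1 with probability 1/2 (variance mu*tau).
    n = rng.poisson(mu * tau, size=size)
    return 2 * rng.binomial(n, 0.5) - n

def simulate_loci(model, mu, t, L=50, ancestral=100):
    # Pops 1 and 2 split at the known time t; pop 3 splits from pop
    # `model` at t/2 (an assumption made here for illustration only).
    A = np.full(L, ancestral)             # ancestral repeat numbers at time t
    B1 = A + smm_steps(mu, t / 2, L)      # lineage towards pop 1, at time t/2
    B2 = A + smm_steps(mu, t / 2, L)      # lineage towards pop 2, at time t/2
    pop1 = B1 + smm_steps(mu, t / 2, L)
    pop2 = B2 + smm_steps(mu, t / 2, L)
    pop3 = (B1 if model == 1 else B2) + smm_steps(mu, t / 2, L)
    return pop1, pop2, pop3

mu = rng.uniform(0.0001, 0.01)            # prior draw, mu ~ U[0.0001, 0.01]
pop1, pop2, pop3 = simulate_loci(model=1, mu=mu, t=100.0)
print(np.mean((pop1 - pop2) ** 2))        # ~ 2*mu*t in expectation
```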
A population genetic illustration of ABC model choice
Summary statistics associated with the (δµ)² distance
Let x_{l,i,j} denote the repeat number of the allele at locus l = 1, . . . , L for gene copy i = 1, . . . , 100 within population j = 1, 2, 3. Then
$$(\delta\mu)^2_{j_1,j_2} = \frac{1}{L}\sum_{l=1}^{L}\left(\frac{1}{100}\sum_{i_1=1}^{100} x_{l,i_1,j_1} - \frac{1}{100}\sum_{i_2=1}^{100} x_{l,i_2,j_2}\right)^2\,.$$
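The statistic transcribes directly into code; this short sketch assumes the allele-size data for a population come as an array of shape (L, n), L loci by n sampled gene copies (n = 100 on the slide):

```python
import numpy as np

def delta_mu_squared(x_j1, x_j2):
    """(delta mu)^2 distance between two populations: average over loci
    of the squared difference of within-population mean allele sizes."""
    return float(np.mean((x_j1.mean(axis=1) - x_j2.mean(axis=1)) ** 2))
```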
A population genetic illustration of ABC model choice
For two gene copies at locus l with allele sizes x_{l,i,j_1} and x_{l,i',j_2}, whose most recent common ancestor sits at coalescence time τ_{j_1,j_2}, the gene genealogy distance is 2τ_{j_1,j_2}, hence the number of mutations is Poisson with parameter 2µτ_{j_1,j_2}. Therefore,
$$\mathbb{E}\left[(x_{l,i,j_1} - x_{l,i',j_2})^2 \mid \tau_{j_1,j_2}\right] = 2\mu\,\tau_{j_1,j_2}$$
and, with population 3 diverging at time t/2 from its parent population while populations 1 and 2 diverged at time t, and µ_k denoting the mutation rate under model k,

               Model 1   Model 2
E[(δµ)²₁,₂]     2µ₁t      2µ₂t
E[(δµ)²₁,₃]      µ₁t      2µ₂t
E[(δµ)²₂,₃]     2µ₁t       µ₂t
A population genetic illustration of ABC model choice
Thus,
• a Bayes factor based only on the distance (δµ)²₁,₂ is not convergent: if µ₁ = µ₂, both models give the same expectation
• a Bayes factor based only on (δµ)²₁,₃ or (δµ)²₂,₃ is not convergent: if µ₁ = 2µ₂ or 2µ₁ = µ₂, same expectation
• if two of the three distances are used, the Bayes factor converges: there is no (µ₁, µ₂) for which all expectations are equal (a symbolic check follows below)
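These three claims can be verified symbolically from the expectation table above (itself a reconstruction); a small sympy sketch, with the dictionaries of expected summaries as the only assumed structure:

```python
import sympy as sp

mu1, mu2 = sp.symbols("mu1 mu2")
t = sp.symbols("t", positive=True)

# Expected summaries under models 1 and 2, from the table above
E1 = {"d12": 2*mu1*t, "d13": mu1*t, "d23": 2*mu1*t}
E2 = {"d12": 2*mu2*t, "d13": 2*mu2*t, "d23": mu2*t}

# (delta mu)^2_{1,2} alone: expectations coincide whenever mu1 = mu2
print(sp.solve(sp.Eq(E1["d12"], E2["d12"]), mu1))   # [mu2]

# (delta mu)^2_{1,3} alone: expectations coincide whenever mu1 = 2*mu2
print(sp.solve(sp.Eq(E1["d13"], E2["d13"]), mu1))   # [2*mu2]

# Two distances jointly: only the degenerate mu1 = mu2 = 0 matches both
print(sp.solve([sp.Eq(E1["d12"], E2["d12"]),
                sp.Eq(E1["d13"], E2["d13"])], [mu1, mu2]))  # {mu1: 0, mu2: 0}
```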
A population genetic illustration of ABC model choice
[Figure: boxplots over 5, 50 and 100 loci for three choices of summaries: DM2(12), DM2(13), and DM2(13) & DM2(23)]
Posterior probabilities that the data are from model 1 for 5, 50 and 100 loci
A population genetic illustration of ABC model choice
[Figure: two panels of boxplots comparing the summary choices DM2(12) and DM2(13) & DM2(23), the second panel reporting p-values]

Más contenido relacionado

La actualidad más candente

Multiple estimators for Monte Carlo approximations
Multiple estimators for Monte Carlo approximationsMultiple estimators for Monte Carlo approximations
Multiple estimators for Monte Carlo approximationsChristian Robert
 
Mark Girolami's Read Paper 2010
Mark Girolami's Read Paper 2010Mark Girolami's Read Paper 2010
Mark Girolami's Read Paper 2010Christian Robert
 
Bayesian hybrid variable selection under generalized linear models
Bayesian hybrid variable selection under generalized linear modelsBayesian hybrid variable selection under generalized linear models
Bayesian hybrid variable selection under generalized linear modelsCaleb (Shiqiang) Jin
 
Approximate Bayesian Computation with Quasi-Likelihoods
Approximate Bayesian Computation with Quasi-LikelihoodsApproximate Bayesian Computation with Quasi-Likelihoods
Approximate Bayesian Computation with Quasi-LikelihoodsStefano Cabras
 
My data are incomplete and noisy: Information-reduction statistical methods f...
My data are incomplete and noisy: Information-reduction statistical methods f...My data are incomplete and noisy: Information-reduction statistical methods f...
My data are incomplete and noisy: Information-reduction statistical methods f...Umberto Picchini
 
A Tutorial of the EM-algorithm and Its Application to Outlier Detection
A Tutorial of the EM-algorithm and Its Application to Outlier DetectionA Tutorial of the EM-algorithm and Its Application to Outlier Detection
A Tutorial of the EM-algorithm and Its Application to Outlier DetectionKonkuk University, Korea
 
Bayesian inference on mixtures
Bayesian inference on mixturesBayesian inference on mixtures
Bayesian inference on mixturesChristian Robert
 
Discussion of ABC talk by Stefano Cabras, Padova, March 21, 2013
Discussion of ABC talk by Stefano Cabras, Padova, March 21, 2013Discussion of ABC talk by Stefano Cabras, Padova, March 21, 2013
Discussion of ABC talk by Stefano Cabras, Padova, March 21, 2013Christian Robert
 
better together? statistical learning in models made of modules
better together? statistical learning in models made of modulesbetter together? statistical learning in models made of modules
better together? statistical learning in models made of modulesChristian Robert
 
NCE, GANs & VAEs (and maybe BAC)
NCE, GANs & VAEs (and maybe BAC)NCE, GANs & VAEs (and maybe BAC)
NCE, GANs & VAEs (and maybe BAC)Christian Robert
 
Slides econometrics-2018-graduate-2
Slides econometrics-2018-graduate-2Slides econometrics-2018-graduate-2
Slides econometrics-2018-graduate-2Arthur Charpentier
 
The Magic of Auto Differentiation
The Magic of Auto DifferentiationThe Magic of Auto Differentiation
The Magic of Auto DifferentiationSanyam Kapoor
 
A Quick and Terse Introduction to Efficient Frontier Mathematics
A Quick and Terse Introduction to Efficient Frontier MathematicsA Quick and Terse Introduction to Efficient Frontier Mathematics
A Quick and Terse Introduction to Efficient Frontier MathematicsAshwin Rao
 

La actualidad más candente (20)

Boston talk
Boston talkBoston talk
Boston talk
 
Multiple estimators for Monte Carlo approximations
Multiple estimators for Monte Carlo approximationsMultiple estimators for Monte Carlo approximations
Multiple estimators for Monte Carlo approximations
 
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
 
Mark Girolami's Read Paper 2010
Mark Girolami's Read Paper 2010Mark Girolami's Read Paper 2010
Mark Girolami's Read Paper 2010
 
Bayesian hybrid variable selection under generalized linear models
Bayesian hybrid variable selection under generalized linear modelsBayesian hybrid variable selection under generalized linear models
Bayesian hybrid variable selection under generalized linear models
 
Approximate Bayesian Computation with Quasi-Likelihoods
Approximate Bayesian Computation with Quasi-LikelihoodsApproximate Bayesian Computation with Quasi-Likelihoods
Approximate Bayesian Computation with Quasi-Likelihoods
 
Lausanne 2019 #1
Lausanne 2019 #1Lausanne 2019 #1
Lausanne 2019 #1
 
My data are incomplete and noisy: Information-reduction statistical methods f...
My data are incomplete and noisy: Information-reduction statistical methods f...My data are incomplete and noisy: Information-reduction statistical methods f...
My data are incomplete and noisy: Information-reduction statistical methods f...
 
A Tutorial of the EM-algorithm and Its Application to Outlier Detection
A Tutorial of the EM-algorithm and Its Application to Outlier DetectionA Tutorial of the EM-algorithm and Its Application to Outlier Detection
A Tutorial of the EM-algorithm and Its Application to Outlier Detection
 
Bayesian inference on mixtures
Bayesian inference on mixturesBayesian inference on mixtures
Bayesian inference on mixtures
 
ABC-Gibbs
ABC-GibbsABC-Gibbs
ABC-Gibbs
 
Discussion of ABC talk by Stefano Cabras, Padova, March 21, 2013
Discussion of ABC talk by Stefano Cabras, Padova, March 21, 2013Discussion of ABC talk by Stefano Cabras, Padova, March 21, 2013
Discussion of ABC talk by Stefano Cabras, Padova, March 21, 2013
 
Intractable likelihoods
Intractable likelihoodsIntractable likelihoods
Intractable likelihoods
 
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
 
better together? statistical learning in models made of modules
better together? statistical learning in models made of modulesbetter together? statistical learning in models made of modules
better together? statistical learning in models made of modules
 
NCE, GANs & VAEs (and maybe BAC)
NCE, GANs & VAEs (and maybe BAC)NCE, GANs & VAEs (and maybe BAC)
NCE, GANs & VAEs (and maybe BAC)
 
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
 
Slides econometrics-2018-graduate-2
Slides econometrics-2018-graduate-2Slides econometrics-2018-graduate-2
Slides econometrics-2018-graduate-2
 
The Magic of Auto Differentiation
The Magic of Auto DifferentiationThe Magic of Auto Differentiation
The Magic of Auto Differentiation
 
A Quick and Terse Introduction to Efficient Frontier Mathematics
A Quick and Terse Introduction to Efficient Frontier MathematicsA Quick and Terse Introduction to Efficient Frontier Mathematics
A Quick and Terse Introduction to Efficient Frontier Mathematics
 

Destacado

from model uncertainty to ABC
from model uncertainty to ABCfrom model uncertainty to ABC
from model uncertainty to ABCChristian Robert
 
Computational methods for Bayesian model choice
Computational methods for Bayesian model choiceComputational methods for Bayesian model choice
Computational methods for Bayesian model choiceChristian Robert
 
Testing as estimation: the demise of the Bayes factor
Testing as estimation: the demise of the Bayes factorTesting as estimation: the demise of the Bayes factor
Testing as estimation: the demise of the Bayes factorChristian Robert
 
Reliable ABC model choice via random forests
Reliable ABC model choice via random forestsReliable ABC model choice via random forests
Reliable ABC model choice via random forestsChristian Robert
 
Computational tools for Bayesian model choice
Computational tools for Bayesian model choiceComputational tools for Bayesian model choice
Computational tools for Bayesian model choiceChristian Robert
 
Chapter 0: the what and why of statistics
Chapter 0: the what and why of statisticsChapter 0: the what and why of statistics
Chapter 0: the what and why of statisticsChristian Robert
 
Montpellier Math Colloquium
Montpellier Math ColloquiumMontpellier Math Colloquium
Montpellier Math ColloquiumChristian Robert
 
Statistics (1): estimation, Chapter 1: Models
Statistics (1): estimation, Chapter 1: ModelsStatistics (1): estimation, Chapter 1: Models
Statistics (1): estimation, Chapter 1: ModelsChristian Robert
 
Unbiased Bayes for Big Data
Unbiased Bayes for Big DataUnbiased Bayes for Big Data
Unbiased Bayes for Big DataChristian Robert
 
Tutorial on testing at O'Bayes 2015, Valencià, June 1, 2015
Tutorial on testing at O'Bayes 2015, Valencià, June 1, 2015Tutorial on testing at O'Bayes 2015, Valencià, June 1, 2015
Tutorial on testing at O'Bayes 2015, Valencià, June 1, 2015Christian Robert
 
no U-turn sampler, a discussion of Hoffman & Gelman NUTS algorithm
no U-turn sampler, a discussion of Hoffman & Gelman NUTS algorithmno U-turn sampler, a discussion of Hoffman & Gelman NUTS algorithm
no U-turn sampler, a discussion of Hoffman & Gelman NUTS algorithmChristian Robert
 
Chapter 4: Decision theory and Bayesian analysis
Chapter 4: Decision theory and Bayesian analysisChapter 4: Decision theory and Bayesian analysis
Chapter 4: Decision theory and Bayesian analysisChristian Robert
 
Theory of Probability revisited
Theory of Probability revisitedTheory of Probability revisited
Theory of Probability revisitedChristian Robert
 
On the vexing dilemma of hypothesis testing and the predicted demise of the B...
On the vexing dilemma of hypothesis testing and the predicted demise of the B...On the vexing dilemma of hypothesis testing and the predicted demise of the B...
On the vexing dilemma of hypothesis testing and the predicted demise of the B...Christian Robert
 
Can we estimate a constant?
Can we estimate a constant?Can we estimate a constant?
Can we estimate a constant?Christian Robert
 
Statistics (1): estimation Chapter 3: likelihood function and likelihood esti...
Statistics (1): estimation Chapter 3: likelihood function and likelihood esti...Statistics (1): estimation Chapter 3: likelihood function and likelihood esti...
Statistics (1): estimation Chapter 3: likelihood function and likelihood esti...Christian Robert
 
Delayed acceptance for Metropolis-Hastings algorithms
Delayed acceptance for Metropolis-Hastings algorithmsDelayed acceptance for Metropolis-Hastings algorithms
Delayed acceptance for Metropolis-Hastings algorithmsChristian Robert
 

Destacado (20)

from model uncertainty to ABC
from model uncertainty to ABCfrom model uncertainty to ABC
from model uncertainty to ABC
 
Computational methods for Bayesian model choice
Computational methods for Bayesian model choiceComputational methods for Bayesian model choice
Computational methods for Bayesian model choice
 
Testing as estimation: the demise of the Bayes factor
Testing as estimation: the demise of the Bayes factorTesting as estimation: the demise of the Bayes factor
Testing as estimation: the demise of the Bayes factor
 
Reliable ABC model choice via random forests
Reliable ABC model choice via random forestsReliable ABC model choice via random forests
Reliable ABC model choice via random forests
 
Computational tools for Bayesian model choice
Computational tools for Bayesian model choiceComputational tools for Bayesian model choice
Computational tools for Bayesian model choice
 
Chapter 0: the what and why of statistics
Chapter 0: the what and why of statisticsChapter 0: the what and why of statistics
Chapter 0: the what and why of statistics
 
Montpellier Math Colloquium
Montpellier Math ColloquiumMontpellier Math Colloquium
Montpellier Math Colloquium
 
Statistics (1): estimation, Chapter 1: Models
Statistics (1): estimation, Chapter 1: ModelsStatistics (1): estimation, Chapter 1: Models
Statistics (1): estimation, Chapter 1: Models
 
Unbiased Bayes for Big Data
Unbiased Bayes for Big DataUnbiased Bayes for Big Data
Unbiased Bayes for Big Data
 
Tutorial on testing at O'Bayes 2015, Valencià, June 1, 2015
Tutorial on testing at O'Bayes 2015, Valencià, June 1, 2015Tutorial on testing at O'Bayes 2015, Valencià, June 1, 2015
Tutorial on testing at O'Bayes 2015, Valencià, June 1, 2015
 
no U-turn sampler, a discussion of Hoffman & Gelman NUTS algorithm
no U-turn sampler, a discussion of Hoffman & Gelman NUTS algorithmno U-turn sampler, a discussion of Hoffman & Gelman NUTS algorithm
no U-turn sampler, a discussion of Hoffman & Gelman NUTS algorithm
 
Chapter 4: Decision theory and Bayesian analysis
Chapter 4: Decision theory and Bayesian analysisChapter 4: Decision theory and Bayesian analysis
Chapter 4: Decision theory and Bayesian analysis
 
Theory of Probability revisited
Theory of Probability revisitedTheory of Probability revisited
Theory of Probability revisited
 
Big model, big data
Big model, big dataBig model, big data
Big model, big data
 
Jere Koskela slides
Jere Koskela slidesJere Koskela slides
Jere Koskela slides
 
On the vexing dilemma of hypothesis testing and the predicted demise of the B...
On the vexing dilemma of hypothesis testing and the predicted demise of the B...On the vexing dilemma of hypothesis testing and the predicted demise of the B...
On the vexing dilemma of hypothesis testing and the predicted demise of the B...
 
Can we estimate a constant?
Can we estimate a constant?Can we estimate a constant?
Can we estimate a constant?
 
ISBA 2016: Foundations
ISBA 2016: FoundationsISBA 2016: Foundations
ISBA 2016: Foundations
 
Statistics (1): estimation Chapter 3: likelihood function and likelihood esti...
Statistics (1): estimation Chapter 3: likelihood function and likelihood esti...Statistics (1): estimation Chapter 3: likelihood function and likelihood esti...
Statistics (1): estimation Chapter 3: likelihood function and likelihood esti...
 
Delayed acceptance for Metropolis-Hastings algorithms
Delayed acceptance for Metropolis-Hastings algorithmsDelayed acceptance for Metropolis-Hastings algorithms
Delayed acceptance for Metropolis-Hastings algorithms
 

Similar a Ab cancun

3rd NIPS Workshop on PROBABILISTIC PROGRAMMING
3rd NIPS Workshop on PROBABILISTIC PROGRAMMING3rd NIPS Workshop on PROBABILISTIC PROGRAMMING
3rd NIPS Workshop on PROBABILISTIC PROGRAMMINGChristian Robert
 
Workshop in honour of Don Poskitt and Gael Martin
Workshop in honour of Don Poskitt and Gael MartinWorkshop in honour of Don Poskitt and Gael Martin
Workshop in honour of Don Poskitt and Gael MartinChristian Robert
 
Laplace's Demon: seminar #1
Laplace's Demon: seminar #1Laplace's Demon: seminar #1
Laplace's Demon: seminar #1Christian Robert
 
A nonlinear approximation of the Bayesian Update formula
A nonlinear approximation of the Bayesian Update formulaA nonlinear approximation of the Bayesian Update formula
A nonlinear approximation of the Bayesian Update formulaAlexander Litvinenko
 
Maximum likelihood estimation of regularisation parameters in inverse problem...
Maximum likelihood estimation of regularisation parameters in inverse problem...Maximum likelihood estimation of regularisation parameters in inverse problem...
Maximum likelihood estimation of regularisation parameters in inverse problem...Valentin De Bortoli
 
Asymptotics of ABC, lecture, Collège de France
Asymptotics of ABC, lecture, Collège de FranceAsymptotics of ABC, lecture, Collège de France
Asymptotics of ABC, lecture, Collège de FranceChristian Robert
 
ABC short course: model choice chapter
ABC short course: model choice chapterABC short course: model choice chapter
ABC short course: model choice chapterChristian Robert
 
Stratified sampling and resampling for approximate Bayesian computation
Stratified sampling and resampling for approximate Bayesian computationStratified sampling and resampling for approximate Bayesian computation
Stratified sampling and resampling for approximate Bayesian computationUmberto Picchini
 
Bayesian Deep Learning
Bayesian Deep LearningBayesian Deep Learning
Bayesian Deep LearningRayKim51
 
ANU ASTR 4004 / 8004 Astronomical Computing : Lecture 9
ANU ASTR 4004 / 8004 Astronomical Computing : Lecture 9ANU ASTR 4004 / 8004 Astronomical Computing : Lecture 9
ANU ASTR 4004 / 8004 Astronomical Computing : Lecture 9tingyuansenastro
 
Fractional hot deck imputation - Jae Kim
Fractional hot deck imputation - Jae KimFractional hot deck imputation - Jae Kim
Fractional hot deck imputation - Jae KimJae-kwang Kim
 
Likelihood free computational statistics
Likelihood free computational statisticsLikelihood free computational statistics
Likelihood free computational statisticsPierre Pudlo
 

Similar a Ab cancun (20)

3rd NIPS Workshop on PROBABILISTIC PROGRAMMING
3rd NIPS Workshop on PROBABILISTIC PROGRAMMING3rd NIPS Workshop on PROBABILISTIC PROGRAMMING
3rd NIPS Workshop on PROBABILISTIC PROGRAMMING
 
Workshop in honour of Don Poskitt and Gael Martin
Workshop in honour of Don Poskitt and Gael MartinWorkshop in honour of Don Poskitt and Gael Martin
Workshop in honour of Don Poskitt and Gael Martin
 
the ABC of ABC
the ABC of ABCthe ABC of ABC
the ABC of ABC
 
Intro to ABC
Intro to ABCIntro to ABC
Intro to ABC
 
asymptotics of ABC
asymptotics of ABCasymptotics of ABC
asymptotics of ABC
 
Laplace's Demon: seminar #1
Laplace's Demon: seminar #1Laplace's Demon: seminar #1
Laplace's Demon: seminar #1
 
A nonlinear approximation of the Bayesian Update formula
A nonlinear approximation of the Bayesian Update formulaA nonlinear approximation of the Bayesian Update formula
A nonlinear approximation of the Bayesian Update formula
 
mcmc
mcmcmcmc
mcmc
 
Lecture11 xing
Lecture11 xingLecture11 xing
Lecture11 xing
 
Maximum likelihood estimation of regularisation parameters in inverse problem...
Maximum likelihood estimation of regularisation parameters in inverse problem...Maximum likelihood estimation of regularisation parameters in inverse problem...
Maximum likelihood estimation of regularisation parameters in inverse problem...
 
MUMS: Bayesian, Fiducial, and Frequentist Conference - Model Selection in the...
MUMS: Bayesian, Fiducial, and Frequentist Conference - Model Selection in the...MUMS: Bayesian, Fiducial, and Frequentist Conference - Model Selection in the...
MUMS: Bayesian, Fiducial, and Frequentist Conference - Model Selection in the...
 
Asymptotics of ABC, lecture, Collège de France
Asymptotics of ABC, lecture, Collège de FranceAsymptotics of ABC, lecture, Collège de France
Asymptotics of ABC, lecture, Collège de France
 
ABC short course: model choice chapter
ABC short course: model choice chapterABC short course: model choice chapter
ABC short course: model choice chapter
 
Stratified sampling and resampling for approximate Bayesian computation
Stratified sampling and resampling for approximate Bayesian computationStratified sampling and resampling for approximate Bayesian computation
Stratified sampling and resampling for approximate Bayesian computation
 
Bayesian Deep Learning
Bayesian Deep LearningBayesian Deep Learning
Bayesian Deep Learning
 
ANU ASTR 4004 / 8004 Astronomical Computing : Lecture 9
ANU ASTR 4004 / 8004 Astronomical Computing : Lecture 9ANU ASTR 4004 / 8004 Astronomical Computing : Lecture 9
ANU ASTR 4004 / 8004 Astronomical Computing : Lecture 9
 
QMC: Transition Workshop - Probability Models for Discretization Uncertainty ...
QMC: Transition Workshop - Probability Models for Discretization Uncertainty ...QMC: Transition Workshop - Probability Models for Discretization Uncertainty ...
QMC: Transition Workshop - Probability Models for Discretization Uncertainty ...
 
Fractional hot deck imputation - Jae Kim
Fractional hot deck imputation - Jae KimFractional hot deck imputation - Jae Kim
Fractional hot deck imputation - Jae Kim
 
Likelihood free computational statistics
Likelihood free computational statisticsLikelihood free computational statistics
Likelihood free computational statistics
 
CLIM Fall 2017 Course: Statistics for Climate Research, Spatial Data: Models ...
CLIM Fall 2017 Course: Statistics for Climate Research, Spatial Data: Models ...CLIM Fall 2017 Course: Statistics for Climate Research, Spatial Data: Models ...
CLIM Fall 2017 Course: Statistics for Climate Research, Spatial Data: Models ...
 

Más de Christian Robert

How many components in a mixture?
How many components in a mixture?How many components in a mixture?
How many components in a mixture?Christian Robert
 
Testing for mixtures at BNP 13
Testing for mixtures at BNP 13Testing for mixtures at BNP 13
Testing for mixtures at BNP 13Christian Robert
 
Inferring the number of components: dream or reality?
Inferring the number of components: dream or reality?Inferring the number of components: dream or reality?
Inferring the number of components: dream or reality?Christian Robert
 
Testing for mixtures by seeking components
Testing for mixtures by seeking componentsTesting for mixtures by seeking components
Testing for mixtures by seeking componentsChristian Robert
 
discussion on Bayesian restricted likelihood
discussion on Bayesian restricted likelihooddiscussion on Bayesian restricted likelihood
discussion on Bayesian restricted likelihoodChristian Robert
 
Coordinate sampler : A non-reversible Gibbs-like sampler
Coordinate sampler : A non-reversible Gibbs-like samplerCoordinate sampler : A non-reversible Gibbs-like sampler
Coordinate sampler : A non-reversible Gibbs-like samplerChristian Robert
 
Likelihood-free Design: a discussion
Likelihood-free Design: a discussionLikelihood-free Design: a discussion
Likelihood-free Design: a discussionChristian Robert
 
CISEA 2019: ABC consistency and convergence
CISEA 2019: ABC consistency and convergenceCISEA 2019: ABC consistency and convergence
CISEA 2019: ABC consistency and convergenceChristian Robert
 
a discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment models
a discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment modelsa discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment models
a discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment modelsChristian Robert
 
ABC based on Wasserstein distances
ABC based on Wasserstein distancesABC based on Wasserstein distances
ABC based on Wasserstein distancesChristian Robert
 
Poster for Bayesian Statistics in the Big Data Era conference
Poster for Bayesian Statistics in the Big Data Era conferencePoster for Bayesian Statistics in the Big Data Era conference
Poster for Bayesian Statistics in the Big Data Era conferenceChristian Robert
 
short course at CIRM, Bayesian Masterclass, October 2018
short course at CIRM, Bayesian Masterclass, October 2018short course at CIRM, Bayesian Masterclass, October 2018
short course at CIRM, Bayesian Masterclass, October 2018Christian Robert
 
ABC with Wasserstein distances
ABC with Wasserstein distancesABC with Wasserstein distances
ABC with Wasserstein distancesChristian Robert
 
prior selection for mixture estimation
prior selection for mixture estimationprior selection for mixture estimation
prior selection for mixture estimationChristian Robert
 

Más de Christian Robert (20)

discussion of ICML23.pdf
discussion of ICML23.pdfdiscussion of ICML23.pdf
discussion of ICML23.pdf
 
How many components in a mixture?
How many components in a mixture?How many components in a mixture?
How many components in a mixture?
 
restore.pdf
restore.pdfrestore.pdf
restore.pdf
 
Testing for mixtures at BNP 13
Testing for mixtures at BNP 13Testing for mixtures at BNP 13
Testing for mixtures at BNP 13
 
Inferring the number of components: dream or reality?
Inferring the number of components: dream or reality?Inferring the number of components: dream or reality?
Inferring the number of components: dream or reality?
 
CDT 22 slides.pdf
CDT 22 slides.pdfCDT 22 slides.pdf
CDT 22 slides.pdf
 
Testing for mixtures by seeking components
Testing for mixtures by seeking componentsTesting for mixtures by seeking components
Testing for mixtures by seeking components
 
discussion on Bayesian restricted likelihood
discussion on Bayesian restricted likelihooddiscussion on Bayesian restricted likelihood
discussion on Bayesian restricted likelihood
 
Coordinate sampler : A non-reversible Gibbs-like sampler
Coordinate sampler : A non-reversible Gibbs-like samplerCoordinate sampler : A non-reversible Gibbs-like sampler
Coordinate sampler : A non-reversible Gibbs-like sampler
 
eugenics and statistics
eugenics and statisticseugenics and statistics
eugenics and statistics
 
ABC-Gibbs
ABC-GibbsABC-Gibbs
ABC-Gibbs
 
ABC-Gibbs
ABC-GibbsABC-Gibbs
ABC-Gibbs
 
Likelihood-free Design: a discussion
Likelihood-free Design: a discussionLikelihood-free Design: a discussion
Likelihood-free Design: a discussion
 
CISEA 2019: ABC consistency and convergence
CISEA 2019: ABC consistency and convergenceCISEA 2019: ABC consistency and convergence
CISEA 2019: ABC consistency and convergence
 
a discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment models
a discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment modelsa discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment models
a discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment models
 
ABC based on Wasserstein distances
ABC based on Wasserstein distancesABC based on Wasserstein distances
ABC based on Wasserstein distances
 
Poster for Bayesian Statistics in the Big Data Era conference
Poster for Bayesian Statistics in the Big Data Era conferencePoster for Bayesian Statistics in the Big Data Era conference
Poster for Bayesian Statistics in the Big Data Era conference
 
short course at CIRM, Bayesian Masterclass, October 2018
short course at CIRM, Bayesian Masterclass, October 2018short course at CIRM, Bayesian Masterclass, October 2018
short course at CIRM, Bayesian Masterclass, October 2018
 
ABC with Wasserstein distances
ABC with Wasserstein distancesABC with Wasserstein distances
ABC with Wasserstein distances
 
prior selection for mixture estimation
prior selection for mixture estimationprior selection for mixture estimation
prior selection for mixture estimation
 

Último

basic entomology with insect anatomy and taxonomy
basic entomology with insect anatomy and taxonomybasic entomology with insect anatomy and taxonomy
basic entomology with insect anatomy and taxonomyDrAnita Sharma
 
STOPPED FLOW METHOD & APPLICATION MURUGAVENI B.pptx
STOPPED FLOW METHOD & APPLICATION MURUGAVENI B.pptxSTOPPED FLOW METHOD & APPLICATION MURUGAVENI B.pptx
STOPPED FLOW METHOD & APPLICATION MURUGAVENI B.pptxMurugaveni B
 
FREE NURSING BUNDLE FOR NURSES.PDF by na
FREE NURSING BUNDLE FOR NURSES.PDF by naFREE NURSING BUNDLE FOR NURSES.PDF by na
FREE NURSING BUNDLE FOR NURSES.PDF by naJASISJULIANOELYNV
 
Citronella presentation SlideShare mani upadhyay
Citronella presentation SlideShare mani upadhyayCitronella presentation SlideShare mani upadhyay
Citronella presentation SlideShare mani upadhyayupadhyaymani499
 
Best Call Girls In Sector 29 Gurgaon❤️8860477959 EscorTs Service In 24/7 Delh...
Best Call Girls In Sector 29 Gurgaon❤️8860477959 EscorTs Service In 24/7 Delh...Best Call Girls In Sector 29 Gurgaon❤️8860477959 EscorTs Service In 24/7 Delh...
Best Call Girls In Sector 29 Gurgaon❤️8860477959 EscorTs Service In 24/7 Delh...lizamodels9
 
Base editing, prime editing, Cas13 & RNA editing and organelle base editing
Base editing, prime editing, Cas13 & RNA editing and organelle base editingBase editing, prime editing, Cas13 & RNA editing and organelle base editing
Base editing, prime editing, Cas13 & RNA editing and organelle base editingNetHelix
 
Call Girls In Nihal Vihar Delhi ❤️8860477959 Looking Escorts In 24/7 Delhi NCR
Call Girls In Nihal Vihar Delhi ❤️8860477959 Looking Escorts In 24/7 Delhi NCRCall Girls In Nihal Vihar Delhi ❤️8860477959 Looking Escorts In 24/7 Delhi NCR
Call Girls In Nihal Vihar Delhi ❤️8860477959 Looking Escorts In 24/7 Delhi NCRlizamodels9
 
THE ROLE OF PHARMACOGNOSY IN TRADITIONAL AND MODERN SYSTEM OF MEDICINE.pptx
THE ROLE OF PHARMACOGNOSY IN TRADITIONAL AND MODERN SYSTEM OF MEDICINE.pptxTHE ROLE OF PHARMACOGNOSY IN TRADITIONAL AND MODERN SYSTEM OF MEDICINE.pptx
THE ROLE OF PHARMACOGNOSY IN TRADITIONAL AND MODERN SYSTEM OF MEDICINE.pptxNandakishor Bhaurao Deshmukh
 
Pests of jatropha_Bionomics_identification_Dr.UPR.pdf
Pests of jatropha_Bionomics_identification_Dr.UPR.pdfPests of jatropha_Bionomics_identification_Dr.UPR.pdf
Pests of jatropha_Bionomics_identification_Dr.UPR.pdfPirithiRaju
 
Vision and reflection on Mining Software Repositories research in 2024
Vision and reflection on Mining Software Repositories research in 2024Vision and reflection on Mining Software Repositories research in 2024
Vision and reflection on Mining Software Repositories research in 2024AyushiRastogi48
 
Fertilization: Sperm and the egg—collectively called the gametes—fuse togethe...
Fertilization: Sperm and the egg—collectively called the gametes—fuse togethe...Fertilization: Sperm and the egg—collectively called the gametes—fuse togethe...
Fertilization: Sperm and the egg—collectively called the gametes—fuse togethe...D. B. S. College Kanpur
 
Carbon Dioxide Capture and Storage (CSS)
Carbon Dioxide Capture and Storage (CSS)Carbon Dioxide Capture and Storage (CSS)
Carbon Dioxide Capture and Storage (CSS)Tamer Koksalan, PhD
 
Speech, hearing, noise, intelligibility.pptx
Speech, hearing, noise, intelligibility.pptxSpeech, hearing, noise, intelligibility.pptx
Speech, hearing, noise, intelligibility.pptxpriyankatabhane
 
Call Girls in Munirka Delhi 💯Call Us 🔝8264348440🔝
Call Girls in Munirka Delhi 💯Call Us 🔝8264348440🔝Call Girls in Munirka Delhi 💯Call Us 🔝8264348440🔝
Call Girls in Munirka Delhi 💯Call Us 🔝8264348440🔝soniya singh
 
Davis plaque method.pptx recombinant DNA technology
Davis plaque method.pptx recombinant DNA technologyDavis plaque method.pptx recombinant DNA technology
Davis plaque method.pptx recombinant DNA technologycaarthichand2003
 
Behavioral Disorder: Schizophrenia & it's Case Study.pdf
Behavioral Disorder: Schizophrenia & it's Case Study.pdfBehavioral Disorder: Schizophrenia & it's Case Study.pdf
Behavioral Disorder: Schizophrenia & it's Case Study.pdfSELF-EXPLANATORY
 
Pests of Blackgram, greengram, cowpea_Dr.UPR.pdf
Pests of Blackgram, greengram, cowpea_Dr.UPR.pdfPests of Blackgram, greengram, cowpea_Dr.UPR.pdf
Pests of Blackgram, greengram, cowpea_Dr.UPR.pdfPirithiRaju
 
OECD bibliometric indicators: Selected highlights, April 2024
OECD bibliometric indicators: Selected highlights, April 2024OECD bibliometric indicators: Selected highlights, April 2024
OECD bibliometric indicators: Selected highlights, April 2024innovationoecd
 
Call Girls in Majnu Ka Tilla Delhi 🔝9711014705🔝 Genuine
Call Girls in Majnu Ka Tilla Delhi 🔝9711014705🔝 GenuineCall Girls in Majnu Ka Tilla Delhi 🔝9711014705🔝 Genuine
Call Girls in Majnu Ka Tilla Delhi 🔝9711014705🔝 Genuinethapagita
 

Último (20)

basic entomology with insect anatomy and taxonomy
basic entomology with insect anatomy and taxonomybasic entomology with insect anatomy and taxonomy
basic entomology with insect anatomy and taxonomy
 
STOPPED FLOW METHOD & APPLICATION MURUGAVENI B.pptx
STOPPED FLOW METHOD & APPLICATION MURUGAVENI B.pptxSTOPPED FLOW METHOD & APPLICATION MURUGAVENI B.pptx
STOPPED FLOW METHOD & APPLICATION MURUGAVENI B.pptx
 
FREE NURSING BUNDLE FOR NURSES.PDF by na
FREE NURSING BUNDLE FOR NURSES.PDF by naFREE NURSING BUNDLE FOR NURSES.PDF by na
FREE NURSING BUNDLE FOR NURSES.PDF by na
 
Volatile Oils Pharmacognosy And Phytochemistry -I
Volatile Oils Pharmacognosy And Phytochemistry -IVolatile Oils Pharmacognosy And Phytochemistry -I
Volatile Oils Pharmacognosy And Phytochemistry -I
 
Citronella presentation SlideShare mani upadhyay
Citronella presentation SlideShare mani upadhyayCitronella presentation SlideShare mani upadhyay
Citronella presentation SlideShare mani upadhyay
 
Best Call Girls In Sector 29 Gurgaon❤️8860477959 EscorTs Service In 24/7 Delh...
Best Call Girls In Sector 29 Gurgaon❤️8860477959 EscorTs Service In 24/7 Delh...Best Call Girls In Sector 29 Gurgaon❤️8860477959 EscorTs Service In 24/7 Delh...
Best Call Girls In Sector 29 Gurgaon❤️8860477959 EscorTs Service In 24/7 Delh...
 
Base editing, prime editing, Cas13 & RNA editing and organelle base editing
Base editing, prime editing, Cas13 & RNA editing and organelle base editingBase editing, prime editing, Cas13 & RNA editing and organelle base editing
Base editing, prime editing, Cas13 & RNA editing and organelle base editing
 
Call Girls In Nihal Vihar Delhi ❤️8860477959 Looking Escorts In 24/7 Delhi NCR
Call Girls In Nihal Vihar Delhi ❤️8860477959 Looking Escorts In 24/7 Delhi NCRCall Girls In Nihal Vihar Delhi ❤️8860477959 Looking Escorts In 24/7 Delhi NCR
Call Girls In Nihal Vihar Delhi ❤️8860477959 Looking Escorts In 24/7 Delhi NCR
 
THE ROLE OF PHARMACOGNOSY IN TRADITIONAL AND MODERN SYSTEM OF MEDICINE.pptx
THE ROLE OF PHARMACOGNOSY IN TRADITIONAL AND MODERN SYSTEM OF MEDICINE.pptxTHE ROLE OF PHARMACOGNOSY IN TRADITIONAL AND MODERN SYSTEM OF MEDICINE.pptx
THE ROLE OF PHARMACOGNOSY IN TRADITIONAL AND MODERN SYSTEM OF MEDICINE.pptx
 
Pests of jatropha_Bionomics_identification_Dr.UPR.pdf
Pests of jatropha_Bionomics_identification_Dr.UPR.pdfPests of jatropha_Bionomics_identification_Dr.UPR.pdf
Pests of jatropha_Bionomics_identification_Dr.UPR.pdf
 
Vision and reflection on Mining Software Repositories research in 2024
Vision and reflection on Mining Software Repositories research in 2024Vision and reflection on Mining Software Repositories research in 2024
Vision and reflection on Mining Software Repositories research in 2024
 
Fertilization: Sperm and the egg—collectively called the gametes—fuse togethe...
Fertilization: Sperm and the egg—collectively called the gametes—fuse togethe...Fertilization: Sperm and the egg—collectively called the gametes—fuse togethe...
Fertilization: Sperm and the egg—collectively called the gametes—fuse togethe...
 
Carbon Dioxide Capture and Storage (CSS)
Carbon Dioxide Capture and Storage (CSS)Carbon Dioxide Capture and Storage (CSS)
Carbon Dioxide Capture and Storage (CSS)
 
Speech, hearing, noise, intelligibility.pptx
Speech, hearing, noise, intelligibility.pptxSpeech, hearing, noise, intelligibility.pptx
Speech, hearing, noise, intelligibility.pptx
 
Call Girls in Munirka Delhi 💯Call Us 🔝8264348440🔝
Call Girls in Munirka Delhi 💯Call Us 🔝8264348440🔝Call Girls in Munirka Delhi 💯Call Us 🔝8264348440🔝
Call Girls in Munirka Delhi 💯Call Us 🔝8264348440🔝
 
Davis plaque method.pptx recombinant DNA technology
Davis plaque method.pptx recombinant DNA technologyDavis plaque method.pptx recombinant DNA technology
Davis plaque method.pptx recombinant DNA technology
 
Behavioral Disorder: Schizophrenia & it's Case Study.pdf
Behavioral Disorder: Schizophrenia & it's Case Study.pdfBehavioral Disorder: Schizophrenia & it's Case Study.pdf
Behavioral Disorder: Schizophrenia & it's Case Study.pdf
 
Pests of Blackgram, greengram, cowpea_Dr.UPR.pdf
Pests of Blackgram, greengram, cowpea_Dr.UPR.pdfPests of Blackgram, greengram, cowpea_Dr.UPR.pdf
Pests of Blackgram, greengram, cowpea_Dr.UPR.pdf
 
OECD bibliometric indicators: Selected highlights, April 2024
OECD bibliometric indicators: Selected highlights, April 2024OECD bibliometric indicators: Selected highlights, April 2024
OECD bibliometric indicators: Selected highlights, April 2024
 
Call Girls in Majnu Ka Tilla Delhi 🔝9711014705🔝 Genuine
Call Girls in Majnu Ka Tilla Delhi 🔝9711014705🔝 GenuineCall Girls in Majnu Ka Tilla Delhi 🔝9711014705🔝 Genuine
Call Girls in Majnu Ka Tilla Delhi 🔝9711014705🔝 Genuine
 

Ab cancun

  • 1. ABC methodology and applications Jean-Michel Marin & Christian P. Robert I3M, Universit´e Montpellier 2, Universit´e Paris-Dauphine, & University of Warwick ISBA 2014, Cancun. July 13
  • 2. Outline 1 Approximate Bayesian computation 2 ABC as an inference machine 3 ABC for model choice 4 Genetics of ABC
  • 3. Approximate Bayesian computation 1 Approximate Bayesian computation ABC basics Alphabet soup ABC-MCMC 2 ABC as an inference machine 3 ABC for model choice 4 Genetics of ABC
  • 4. Untractable likelihoods Cases when the likelihood function f (y|θ) is unavailable and when the completion step f (y|θ) = Z f (y, z|θ) dz is impossible or too costly because of the dimension of z c MCMC cannot be implemented!
  • 5. Untractable likelihoods Cases when the likelihood function f (y|θ) is unavailable and when the completion step f (y|θ) = Z f (y, z|θ) dz is impossible or too costly because of the dimension of z c MCMC cannot be implemented!
  • 6. The ABC method Bayesian setting: target is π(θ)f (y|θ)
  • 7. The ABC method Bayesian setting: target is π(θ)f (y|θ) When likelihood f (y|θ) not in closed form, likelihood-free rejection technique:
  • 8. The ABC method Bayesian setting: target is π(θ)f (y|θ) When likelihood f (y|θ) not in closed form, likelihood-free rejection technique: ABC algorithm For an observation y ∼ f (y|θ), under the prior π(θ), keep jointly simulating θ ∼ π(θ) , z ∼ f (z|θ ) , until the auxiliary variable z is equal to the observed value, z = y. [Tavar´e et al., 1997]
  • 9. Why does it work?! The proof is trivial: f (θi ) ∝ z∈D π(θi )f (z|θi )Iy(z) ∝ π(θi )f (y|θi ) = π(θi |y) . [Accept–Reject 101]
  • 10. Earlier occurrence ‘Bayesian statistics and Monte Carlo methods are ideally suited to the task of passing many models over one dataset’ [Don Rubin, Annals of Statistics, 1984] Note Rubin (1984) does not promote this algorithm for likelihood-free simulation but frequentist intuition on posterior distributions: parameters from posteriors are more likely to be those that could have generated the data.
  • 11. A as A...pproximative When y is a continuous random variable, equality z = y is replaced with a tolerance condition, (y, z) ≤ where is a distance
  • 12. A as A...pproximative When y is a continuous random variable, equality z = y is replaced with a tolerance condition, (y, z) ≤ where is a distance Output distributed from π(θ) Pθ{ (y, z) < } ∝ π(θ| (y, z) < ) [Pritchard et al., 1999]
  • 13. ABC algorithm Algorithm 1 Likelihood-free rejection sampler 2 for i = 1 to N do repeat generate θ from the prior distribution π(·) generate z from the likelihood f (·|θ ) until ρ{η(z), η(y)} ≤ set θi = θ end for where η(y) defines a (not necessarily sufficient) [summary] statistic
  • 14. Output The likelihood-free algorithm samples from the marginal in z of: π (θ, z|y) = π(θ)f (z|θ)IA ,y (z) A ,y×Θ π(θ)f (z|θ)dzdθ , where A ,y = {z ∈ D|ρ(η(z), η(y)) < }.
  • 15. Output The likelihood-free algorithm samples from the marginal in z of: π (θ, z|y) = π(θ)f (z|θ)IA ,y (z) A ,y×Θ π(θ)f (z|θ)dzdθ , where A ,y = {z ∈ D|ρ(η(z), η(y)) < }. The idea behind ABC is that the summary statistics coupled with a small tolerance should provide a good approximation of the posterior distribution: π (θ|y) = π (θ, z|y)dz ≈ π(θ|y) .
  • 16. A toy example Case of y|θ ∼ N1 2(θ + 2)θ(θ − 2), 0.1 + θ2 and θ ∼ U[−10,10] when y = 2 and ρ(y, z) = |y − z| [Richard Wilkinson, Tutorial at NIPS 2013]
  • 17. A toy example ✏ = 7.5 ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ●●● ● ● ●● ● ● ● ●● ● ●● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● −3 −2 −1 0 1 2 3 −1001020 theta vs D theta D ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ●●● ● ●● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ●● ●●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ●● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ●● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●●● ● ● ●● ●● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● −ε +ε D −3 −2 −1 0 1 2 3 0.00.20.40.60.81.01.21.4 Density theta Density ABC True = 7.5
  • 18. A toy example ✏ = 5 ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ●● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ●● ● ● ●● ●● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● −3 −2 −1 0 1 2 3 −1001020 theta vs D theta D ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ●●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ●● ● ● ●●● ● ●● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ●● ● ●● ● ● ● ● ● ● ● ● ● ●● ●●● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ●● ● ●● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ●● ● ● ● ● ● ●● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ●● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● −ε +ε D −3 −2 −1 0 1 2 3 0.00.20.40.60.81.01.21.4 Density theta Density ABC True = 5
  • 19. A toy example ✏ = 2.5 ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ●● ● ●● ● ● ● ● ● ● ● ●● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ●● ●● ●● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● −3 −2 −1 0 1 2 3 −1001020 theta vs D theta D ● ● ● ● ●● ● ● ● ● ● ● ● ●●● ● ● ● ● ● ● ● ● ● ●● ●● ●● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ●● ● ●● ● ● ● ● ● ●● ●●● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ●● ● ● ● ● ● ● ●● ● ● ● ● ● ● ●● ● ●● ● ● ● ● ●● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ●● ● ● ● ●● ● ● ● ● ● ● ● ●● ● ●● ●● ● ● ● ● ● ●●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ●● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ●● ● ● ● ●● ● ●● ● ● ● ● ●● ● ● ● ●● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● −ε +ε D −3 −2 −1 0 1 2 3 0.00.20.40.60.81.01.21.4 Density theta Density ABC True = 2.5
  • 20. A toy example ✏ = 1 ● ● ● ● ● ● ● ● ● ● ● ●● ●● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ●●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ●● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ●● ●● ● ● ●● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● −3 −2 −1 0 1 2 3 −1001020 theta vs D theta D ●● ● ●●● ● ●● ●●● ●● ● ●● ● ●● ●●● ● ● ● ●● ●● ● ●●● ● ●● ● ●● ●●● ● ●● ● ●● ● ● ●● ●● ● ● ●●● ● ● ●● ● ● ●●●● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ●●● ● ● ● ● ● ● ● ●● ●● −ε +ε −3 −2 −1 0 1 2 3 0.00.20.40.60.81.01.21.4 Density theta Density ABC True = 1
  • 21. ABC as knn [Biau et al., 2013, Annales de l’IHP] Practice of ABC: determine tolerance as a quantile on observed distances, say 10% or 1% quantile, = N = qα(d1, . . . , dN)
  • 22. ABC as knn [Biau et al., 2013, Annales de l’IHP] Practice of ABC: determine tolerance as a quantile on observed distances, say 10% or 1% quantile, = N = qα(d1, . . . , dN) • Interpretation of ε as nonparametric bandwidth only approximation of the actual practice [Blum & Fran¸cois, 2010]
  • 23. ABC as knn [Biau et al., 2013, Annales de l’IHP] Practice of ABC: determine tolerance as a quantile on observed distances, say 10% or 1% quantile, = N = qα(d1, . . . , dN) • Interpretation of ε as nonparametric bandwidth only approximation of the actual practice [Blum & Fran¸cois, 2010] • ABC is a k-nearest neighbour (knn) method with kN = N N [Loftsgaarden & Quesenberry, 1965]
  • 24. ABC consistency Provided kN/ log log N −→ ∞ and kN/N −→ 0 as N → ∞, for almost all s0 (with respect to the distribution of S), with probability 1, 1 kN kN j=1 ϕ(θj ) −→ E[ϕ(θj )|S = s0] [Devroye, 1982]
  • 25. ABC consistency Provided kN/ log log N −→ ∞ and kN/N −→ 0 as N → ∞, for almost all s0 (with respect to the distribution of S), with probability 1, 1 kN kN j=1 ϕ(θj ) −→ E[ϕ(θj )|S = s0] [Devroye, 1982] Biau et al. (2013) also recall pointwise and integrated mean square error consistency results on the corresponding kernel estimate of the conditional posterior distribution, under constraints kN → ∞, kN /N → 0, hN → 0 and hp N kN → ∞,
  • 26. Rates of convergence Further assumptions (on target and kernel) allow for precise (integrated mean square) convergence rates (as a power of the sample size N), derived from classical k-nearest neighbour regression, like • when m = 1, 2, 3, kN ≈ N(p+4)/(p+8) and rate N − 4 p+8 • when m = 4, kN ≈ N(p+4)/(p+8) and rate N − 4 p+8 log N • when m > 4, kN ≈ N(p+4)/(m+p+4) and rate N − 4 m+p+4 [Biau et al., 2013] where p dimension of θ and m dimension of summary statistics
  • 27. Rates of convergence Further assumptions (on target and kernel) allow for precise (integrated mean square) convergence rates (as a power of the sample size N), derived from classical k-nearest neighbour regression, like • when m = 1, 2, 3, kN ≈ N(p+4)/(p+8) and rate N − 4 p+8 • when m = 4, kN ≈ N(p+4)/(p+8) and rate N − 4 p+8 log N • when m > 4, kN ≈ N(p+4)/(m+p+4) and rate N − 4 m+p+4 [Biau et al., 2013] where p dimension of θ and m dimension of summary statistics Drag: Only applies to sufficient summary statistics
  • 28. Probit modelling on Pima Indian women Example (R benchmark) 200 Pima Indian women with observed variables • plasma glucose concentration in oral glucose tolerance test • diastolic blood pressure • diabetes pedigree function • presence/absence of diabetes
  • 29. Probit modelling on Pima Indian women Example (R benchmark) 200 Pima Indian women with observed variables • plasma glucose concentration in oral glucose tolerance test • diastolic blood pressure • diabetes pedigree function • presence/absence of diabetes Probability of diabetes function of above variables P(y = 1|x) = Φ(x1β1 + x2β2 + x3β3) , for 200 observations based on a g-prior modelling: β ∼ N3(0, n XT X)−1
  • 30. Probit modelling on Pima Indian women Example (R benchmark) 200 Pima Indian women with observed variables • plasma glucose concentration in oral glucose tolerance test • diastolic blood pressure • diabetes pedigree function • presence/absence of diabetes Probability of diabetes function of above variables P(y = 1|x) = Φ(x1β1 + x2β2 + x3β3) , for 200 observations based on a g-prior modelling: β ∼ N3(0, n XT X)−1 Use of MLE estimates as summary statistics and of distance ρ(y, z) = ||ˆβ(z) − ˆβ(y)||2
  • 31. Pima Indian benchmark −0.005 0.010 0.020 0.030 020406080100 Density −0.05 −0.03 −0.01 020406080 Density −1.0 0.0 1.0 2.0 0.00.20.40.60.81.0 Density Figure: Comparison between density estimates of the marginals on β1 (left), β2 (center) and β3 (right) from ABC rejection samples (red) and MCMC samples (black) .
  • 32. MA example Case of the MA(2) model xt = t + 2 i=1 ϑi t−i Simple prior: uniform prior over the identifiability zone, e.g. triangle for MA(2)
  • 33. MA example (2) ABC algorithm thus made of 1 picking a new value (ϑ1, ϑ2) in the triangle 2 generating an iid sequence ( t)−2<t≤T 3 producing a simulated series (xt)1≤t≤T
  • 34. MA example (2) ABC algorithm thus made of 1 picking a new value (ϑ1, ϑ2) in the triangle 2 generating an iid sequence ( t)−2<t≤T 3 producing a simulated series (xt)1≤t≤T Distance: choice between basic distance between the series ρ((xt)1≤t≤T , (xt)1≤t≤T ) = T t=1 (xt − xt)2 or distance between summary statistics like the 2 autocorrelations τj = T t=j+1 xtxt−j
  • 35. Comparison of distance impact Evaluation of the tolerance on the ABC sample against both distances ( = 10%, 1%, 0.1% quantiles of simulated distances) for an MA(2) model
  • 36. Comparison of distance impact 0.0 0.2 0.4 0.6 0.8 01234 θ1 −2.0 −1.0 0.0 0.5 1.0 1.5 0.00.51.01.5 θ2 Evaluation of the tolerance on the ABC sample against both distances ( = 10%, 1%, 0.1% quantiles of simulated distances) for an MA(2) model
• 38. Homonymy The ABC algorithm is not to be confused with the ABC algorithm The Artificial Bee Colony algorithm is a swarm-based meta-heuristic algorithm that was introduced by Karaboga in 2005 for optimizing numerical problems. It was inspired by the intelligent foraging behavior of honey bees. The algorithm is specifically based on the model proposed by Tereshko and Loengarov (2005) for the foraging behaviour of honey bee colonies. The model consists of three essential components: employed and unemployed foraging bees, and food sources. The first two components, employed and unemployed foraging bees, search for rich food sources (...) close to their hive. The model also defines two leading modes of behaviour (...): recruitment of foragers to rich food sources resulting in positive feedback and abandonment of poor sources by foragers causing negative feedback. [Karaboga, Scholarpedia]
  • 39. ABC advances Simulating from the prior is often poor in efficiency:
  • 40. ABC advances Simulating from the prior is often poor in efficiency: Either modify the proposal distribution on θ to increase the density of z’s within the vicinity of y... [Marjoram et al, 2003; Bortot et al., 2007, Sisson et al., 2007]
• 41. ABC advances Simulating from the prior is often poor in efficiency: Either modify the proposal distribution on θ to increase the density of z's within the vicinity of y... [Marjoram et al., 2003; Bortot et al., 2007; Sisson et al., 2007] ...or by viewing the problem as a conditional density estimation and by developing techniques to allow for larger ε [Beaumont et al., 2002]
• 42. ABC advances Simulating from the prior is often poor in efficiency: Either modify the proposal distribution on θ to increase the density of z's within the vicinity of y... [Marjoram et al., 2003; Bortot et al., 2007; Sisson et al., 2007] ...or by viewing the problem as a conditional density estimation and by developing techniques to allow for larger ε [Beaumont et al., 2002] .....or even by including ε in the inferential framework [ABCµ] [Ratmann et al., 2009]
• 43. ABC-NP Better usage of [prior] simulations by adjustment: instead of throwing away θ' such that ρ(η(z), η(y)) > ε, replace the θ's with locally regressed transforms θ* = θ − {η(z) − η(y)}^T β̂ [Csilléry et al., TEE, 2010] where β̂ is obtained by [NP] weighted least squares regression on (η(z) − η(y)), with weights Kδ{ρ(η(z), η(y))} [Beaumont et al., 2002, Genetics]
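A short R sketch of this local-linear adjustment (assuming a matrix theta of N prior draws, an N × d matrix eta_z of simulated summaries, the observed summary eta_y, and a bandwidth delta, all placeholder names); an Epanechnikov-type kernel supplies the weights:
d2 <- rowSums(sweep(eta_z, 2, eta_y)^2)             # squared distances to eta_y
w <- pmax(1 - d2 / delta^2, 0)                      # Epanechnikov weights
keep <- w > 0
X <- sweep(eta_z[keep, , drop = FALSE], 2, eta_y)   # centred regressors
fit <- lm(theta[keep, ] ~ X, weights = w[keep])     # weighted least squares
beta_hat <- coef(fit)[-1, , drop = FALSE]           # regression coefficients
theta_star <- theta[keep, ] - X %*% beta_hat        # locally regressed transforms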
• 44. ABC-NP (regression) Also found in the subsequent literature, e.g. in Fearnhead-Prangle (2012): weight the simulation directly by Kδ{ρ(η(z(θ)), η(y))} or (1/S) Σ_{s=1}^S Kδ{ρ(η(z_s(θ)), η(y))} [consistent estimate of f(η|θ)]
• 45. ABC-NP (regression) Also found in the subsequent literature, e.g. in Fearnhead-Prangle (2012): weight the simulation directly by Kδ{ρ(η(z(θ)), η(y))} or (1/S) Σ_{s=1}^S Kδ{ρ(η(z_s(θ)), η(y))} [consistent estimate of f(η|θ)] Curse of dimensionality: poor estimate when d = dim(η) is large...
• 46. ABC-NP (regression) Use of the kernel weights Kδ{ρ(η(z(θ)), η(y))} leads to the NP estimate of the posterior expectation Σ_i θ_i Kδ{ρ(η(z(θ_i)), η(y))} / Σ_i Kδ{ρ(η(z(θ_i)), η(y))} [Blum, JASA, 2010]
• 47. ABC-NP (regression) Use of the kernel weights Kδ{ρ(η(z(θ)), η(y))} leads to the NP estimate of the posterior conditional density Σ_i K̃_b(θ_i − θ) Kδ{ρ(η(z(θ_i)), η(y))} / Σ_i Kδ{ρ(η(z(θ_i)), η(y))} [Blum, JASA, 2010]
• 48. ABC-NP (regression) [Figure: simulated (S, θ) pairs with the acceptance band s_obs ± ε on the summary axis, illustrating ABC and regression adjustment]
• 49. ABC-NP (density estimations) Other versions incorporating regression adjustments: Σ_i K̃_b(θ*_i − θ) Kδ{ρ(η(z(θ_i)), η(y))} / Σ_i Kδ{ρ(η(z(θ_i)), η(y))}
• 50. ABC-NP (density estimations) Other versions incorporating regression adjustments: Σ_i K̃_b(θ*_i − θ) Kδ{ρ(η(z(θ_i)), η(y))} / Σ_i Kδ{ρ(η(z(θ_i)), η(y))} In all cases, the error decomposes as E[ĝ(θ|y)] − g(θ|y) = c_b b^2 + c_δ δ^2 + O_P(b^2 + δ^2) + O_P(1/(nδ^d)), var(ĝ(θ|y)) = c/(nbδ^d) (1 + o_P(1)) [Blum, JASA, 2010]
• 51. ABC-NP (density estimations) Other versions incorporating regression adjustments: Σ_i K̃_b(θ*_i − θ) Kδ{ρ(η(z(θ_i)), η(y))} / Σ_i Kδ{ρ(η(z(θ_i)), η(y))} In all cases, the error decomposes as E[ĝ(θ|y)] − g(θ|y) = c_b b^2 + c_δ δ^2 + O_P(b^2 + δ^2) + O_P(1/(nδ^d)), var(ĝ(θ|y)) = c/(nbδ^d) (1 + o_P(1)) [standard NP calculations]
• 52. ABC-NCH Incorporating non-linearities and heteroscedasticities: θ* = m̂(η(y)) + [θ − m̂(η(z))] σ̂(η(y))/σ̂(η(z))
• 53. ABC-NCH Incorporating non-linearities and heteroscedasticities: θ* = m̂(η(y)) + [θ − m̂(η(z))] σ̂(η(y))/σ̂(η(z)) where • m̂(η) is estimated by non-linear regression (e.g., a neural network) • σ̂(η) is estimated by non-linear regression on the residuals, log{θ_i − m̂(η_i)}^2 = log σ^2(η_i) + ξ_i [Blum & François, 2009]
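A sketch of this two-stage adjustment in R, with loess standing in for the neural-network regressions of Blum & François, and scalar theta, simulated summaries s and observed summary s_obs assumed in the session (all placeholder names):
fit_m <- loess(theta ~ s)                           # conditional mean m(s)
res2 <- log((theta - fitted(fit_m))^2)              # log squared residuals
fit_v <- loess(res2 ~ s)                            # log variance log sigma^2(s)
theta_star <- predict(fit_m, s_obs) +
  (theta - fitted(fit_m)) *
  exp(predict(fit_v, s_obs) / 2) / exp(fitted(fit_v) / 2)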
  • 55. ABC-NCH (2) Why neural network? • fights curse of dimensionality • selects relevant summary statistics • provides automated dimension reduction • offers a model choice capability • improves upon multinomial logistic [Blum & Fran¸cois, 2009]
• 56. ABC-MCMC Markov chain (θ^(t)) created via the transition function θ^(t+1) = θ' if θ' ∼ K_ω(θ'|θ^(t)), x ∼ f(x|θ') is such that x = y, and u ∼ U(0,1) ≤ π(θ')K_ω(θ^(t)|θ') / {π(θ^(t))K_ω(θ'|θ^(t))}; θ^(t+1) = θ^(t) otherwise
• 57. ABC-MCMC Markov chain (θ^(t)) created via the transition function θ^(t+1) = θ' if θ' ∼ K_ω(θ'|θ^(t)), x ∼ f(x|θ') is such that x = y, and u ∼ U(0,1) ≤ π(θ')K_ω(θ^(t)|θ') / {π(θ^(t))K_ω(θ'|θ^(t))}; θ^(t+1) = θ^(t) otherwise, which has the posterior π(θ|y) as stationary distribution [Marjoram et al., 2003]
• 58. ABC-MCMC (2) Algorithm 2 Likelihood-free MCMC sampler
Use Algorithm 1 to get (θ^(0), z^(0))
for t = 1 to N do
Generate θ' from K_ω(·|θ^(t−1)),
Generate z' from the likelihood f(·|θ'),
Generate u from U_[0,1],
if u ≤ π(θ')K_ω(θ^(t−1)|θ') / {π(θ^(t−1))K_ω(θ'|θ^(t−1))} I_{A_{ε,y}}(z') then
set (θ^(t), z^(t)) = (θ', z')
else
(θ^(t), z^(t)) = (θ^(t−1), z^(t−1))
end if
end for
• 59. Why does it work? The acceptance probability does not involve calculating the likelihood, since π_ε(θ', z'|y) / π_ε(θ^(t−1), z^(t−1)|y) × q(θ^(t−1)|θ') f(z^(t−1)|θ^(t−1)) / {q(θ'|θ^(t−1)) f(z'|θ')} = [π(θ') f(z'|θ') I_{A_{ε,y}}(z') × q(θ^(t−1)|θ') f(z^(t−1)|θ^(t−1))] / [π(θ^(t−1)) f(z^(t−1)|θ^(t−1)) I_{A_{ε,y}}(z^(t−1)) × q(θ'|θ^(t−1)) f(z'|θ')]
• 60. Why does it work? ...where the f(z'|θ') and f(z^(t−1)|θ^(t−1)) terms cancel...
• 61. Why does it work? ...and, since the current state satisfies I_{A_{ε,y}}(z^(t−1)) = 1, the ratio reduces to π(θ') q(θ^(t−1)|θ') / {π(θ^(t−1)) q(θ'|θ^(t−1))} I_{A_{ε,y}}(z')
• 62. A toy example Case of x ∼ (1/2) N(θ, 1) + (1/2) N(−θ, 1) under prior θ ∼ N(0, 10)
• 63. A toy example Case of x ∼ (1/2) N(θ, 1) + (1/2) N(−θ, 1) under prior θ ∼ N(0, 10) ABC sampler
thetas=rnorm(N,sd=10)
zed=sample(c(1,-1),N,rep=TRUE)*thetas+rnorm(N,sd=1)
eps=quantile(abs(zed-x),.01)
abc=thetas[abs(zed-x)<eps]
• 64. A toy example Case of x ∼ (1/2) N(θ, 1) + (1/2) N(−θ, 1) under prior θ ∼ N(0, 10) ABC-MCMC sampler
metas=rep(0,N)
metas[1]=rnorm(1,sd=10)
zed=rep(x,N)   # pseudo-data chain, started at the observation
for (t in 2:N){
  metas[t]=rnorm(1,mean=metas[t-1],sd=5)
  zed[t]=rnorm(1,mean=(1-2*(runif(1)<.5))*metas[t],sd=1)
  if ((abs(zed[t]-x)>eps)||(runif(1)>dnorm(metas[t],sd=10)/dnorm(metas[t-1],sd=10))){
    metas[t]=metas[t-1]
    zed[t]=zed[t-1]}
}
• 66. A toy example [Figure: ABC posterior density of θ for x = 2]
• 68. A toy example [Figure: ABC posterior density of θ for x = 10]
• 70. A toy example [Figure: ABC posterior density of θ for x = 50]
• 71. A PMC version Use of the same kernel idea as ABC-PRC (Sisson et al., 2007) but with IS correction: generate a sample at iteration t by π̂_t(θ^(t)) ∝ Σ_{j=1}^N ω_j^(t−1) K_t(θ^(t)|θ_j^(t−1)), modulo acceptance of the associated x_t, and use an importance weight associated with an accepted simulation θ_i^(t), ω_i^(t) ∝ π(θ_i^(t)) / π̂_t(θ_i^(t)). c Still likelihood-free [Beaumont et al., 2009]
• 72. ABC-PMC algorithm Given a decreasing sequence of approximation levels ε_1 ≥ . . . ≥ ε_T:
1. At iteration t = 1, for i = 1, ..., N: simulate θ_i^(1) ∼ π(θ) and x ∼ f(x|θ_i^(1)) until ρ(x, y) < ε_1; set ω_i^(1) = 1/N. Take τ_2^2 as twice the empirical variance of the θ_i^(1)'s.
2. At iteration 2 ≤ t ≤ T, for i = 1, ..., N: repeat: pick θ*_i from the θ_j^(t−1)'s with probabilities ω_j^(t−1), generate θ_i^(t)|θ*_i ∼ N(θ*_i, τ_t^2) and x ∼ f(x|θ_i^(t)), until ρ(x, y) < ε_t. Set ω_i^(t) ∝ π(θ_i^(t)) / Σ_{j=1}^N ω_j^(t−1) φ(τ_t^{−1}{θ_i^(t) − θ_j^(t−1)}). Take τ_{t+1}^2 as twice the weighted empirical variance of the θ_i^(t)'s.
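A minimal R sketch of this scheme for the toy mixture example above (observation x assumed available; the tolerance schedule eps_seq is an arbitrary illustration):
N <- 1000
eps_seq <- c(2, 1, .5, .25)
theta <- numeric(N)                                  # iteration 1: rejection from the prior
for (i in 1:N) repeat {
  th <- rnorm(1, sd = 10)
  z <- rnorm(1, mean = sample(c(-1, 1), 1) * th)
  if (abs(z - x) <= eps_seq[1]) { theta[i] <- th; break }
}
w <- rep(1 / N, N)
for (t in 2:length(eps_seq)) {
  tau2 <- 2 * sum(w * (theta - sum(w * theta))^2)    # twice the weighted variance
  theta_new <- numeric(N)
  for (i in 1:N) repeat {
    th <- rnorm(1, mean = sample(theta, 1, prob = w), sd = sqrt(tau2))
    z <- rnorm(1, mean = sample(c(-1, 1), 1) * th)
    if (abs(z - x) <= eps_seq[t]) { theta_new[i] <- th; break }
  }
  w_new <- dnorm(theta_new, sd = 10) /               # importance weights:
    sapply(theta_new, function(th)                   # prior over kernel mixture
      sum(w * dnorm(th, mean = theta, sd = sqrt(tau2))))
  theta <- theta_new
  w <- w_new / sum(w_new)
}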
• 73. Sequential Monte Carlo SMC is a simulation technique to approximate a sequence of related probability distributions π_n, with π_0 "easy" and π_T as target. Iterated IS as PMC: particles moved from time n−1 to time n via a kernel K_n, with a sequence of extended targets π̃_n, π̃_n(z_{0:n}) = π_n(z_n) Π_{j=0}^{n−1} L_j(z_{j+1}, z_j), where the L_j's are backward Markov kernels [check that π_n(z_n) is a marginal] [Del Moral, Doucet & Jasra, Series B, 2006]
• 74. Sequential Monte Carlo (2) Algorithm 3 SMC sampler
sample z_i^(0) ∼ γ_0(·) (i = 1, . . . , N)
compute weights w_i^(0) = π_0(z_i^(0))/γ_0(z_i^(0))
for t = 1 to T do
if ESS(w^(t−1)) < N_T then resample the N particles z^(t−1) and set the weights to 1 end if
generate z_i^(t) ∼ K_t(z_i^(t−1), ·) and set the weights to w_i^(t) = w_i^(t−1) π_t(z_i^(t)) L_{t−1}(z_i^(t), z_i^(t−1)) / {π_{t−1}(z_i^(t−1)) K_t(z_i^(t−1), z_i^(t))}
end for [Del Moral, Doucet & Jasra, Series B, 2006]
• 75. ABC-SMC [Del Moral, Doucet & Jasra, 2009] True derivation of an SMC-ABC algorithm: use of a kernel K_n associated with the target π_{ε_n} and derivation of the backward kernel L_{n−1}(z, z') = π_{ε_n}(z') K_n(z', z) / π_{ε_n}(z) Update of the weights w_{in} ∝ w_{i(n−1)} Σ_{m=1}^M I_{A_{ε_n}}(x_{in}^m) / Σ_{m=1}^M I_{A_{ε_{n−1}}}(x_{i(n−1)}^m) when x_{in}^m ∼ K(x_{i(n−1)}, ·)
• 76. ABC-SMCM Modification: makes M repeated simulations of the pseudo-data z given the parameter, rather than using a single [M = 1] simulation, leading to a weight proportional to the number of accepted z_i's, ω(θ) = (1/M) Σ_{i=1}^M I_{ρ(η(y),η(z_i))<ε} [limit in M means exact simulation from the (tempered) target]
• 77. Properties of ABC-SMC The ABC-SMC method properly uses a backward kernel L(z, z') to simplify the importance weight and to remove the dependence on the unknown likelihood from this weight. The update of the importance weights is reduced to the ratio of the proportions of surviving particles Major assumption: the forward kernel K is supposed to be invariant with respect to the true target [tempered version of the true posterior]
• 78. Properties of ABC-SMC The ABC-SMC method properly uses a backward kernel L(z, z') to simplify the importance weight and to remove the dependence on the unknown likelihood from this weight. The update of the importance weights is reduced to the ratio of the proportions of surviving particles Major assumption: the forward kernel K is supposed to be invariant with respect to the true target [tempered version of the true posterior] Adaptivity in the ABC-SMC algorithm is only found in the on-line construction of the thresholds ε_t, slowly enough to keep a large number of accepted transitions
• 79. A mixture example (2) Recovery of the target, whether using a fixed standard deviation of τ = 0.15 or τ = 1/0.15, or a sequence of adaptive τ_t's. [Figure: five density estimates of the θ posterior, all recovering the target]
  • 80. ABC inference machine 1 Approximate Bayesian computation 2 ABC as an inference machine Exact ABC ABCµ Automated summary statistic selection 3 ABC for model choice 4 Genetics of ABC
  • 81. How Bayesian is aBc..? • may be a convergent method of inference (meaningful? sufficient? foreign?) • approximation error unknown (w/o massive simulation) • pragmatic/empirical B (there is no other solution!) • many calibration issues (tolerance, distance, statistics) • the NP side should be incorporated into the whole B picture • the approximation error should also be part of the B inference
• 82. Wilkinson’s exact (A)BC ABC approximation error (i.e. non-zero tolerance) replaced with exact simulation from a controlled approximation to the target, the convolution of the true posterior with a kernel function, π_ε(θ, z|y) = π(θ) f(z|θ) K_ε(y − z) / ∫∫ π(θ) f(z|θ) K_ε(y − z) dz dθ, with K_ε a kernel parameterised by the bandwidth ε. [Wilkinson, 2013]
• 83. Wilkinson’s exact (A)BC ABC approximation error (i.e. non-zero tolerance) replaced with exact simulation from a controlled approximation to the target, the convolution of the true posterior with a kernel function, π_ε(θ, z|y) = π(θ) f(z|θ) K_ε(y − z) / ∫∫ π(θ) f(z|θ) K_ε(y − z) dz dθ, with K_ε a kernel parameterised by the bandwidth ε. [Wilkinson, 2013] Theorem The ABC algorithm based on the assumption of a randomised observation y = ỹ + ξ, ξ ∼ K_ε, and an acceptance probability of K_ε(y − z)/M gives draws from the posterior distribution π(θ|y).
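A sketch of the corresponding acceptance step in R, with a Gaussian kernel K_ε whose maximum M = dnorm(0, sd = eps) provides the bound; rprior() and rlik(theta) are placeholders for the prior and likelihood simulators:
draw_one <- function(y, eps) {
  repeat {
    theta <- rprior()
    z <- rlik(theta)                                # pseudo-observation
    if (runif(1) <= dnorm(y - z, sd = eps) / dnorm(0, sd = eps))
      return(theta)                                 # draw from pi(theta|y), y randomised
  }
}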
• 84. How exact a BC? “Using ε to represent measurement error is straightforward, whereas using ε to model the model discrepancy is harder to conceptualize and not as commonly used” [Richard Wilkinson, 2013]
• 85. How exact a BC? Pros • Pseudo-data from the true model and observed data from the noisy model • Interesting perspective in that the outcome is completely controlled • Link with ABCµ and assuming y is observed with a measurement error with density K_ε • Relates to the theory of model approximation [Kennedy & O'Hagan, 2001] Cons • Requires K_ε to be bounded by M • True approximation error never assessed • Requires a modification of the standard ABC algorithm
• 86. Noisy ABC Idea: modify the data from the start, ỹ = y_0 + εζ_1, with ε on the same scale as the ABC tolerance [see Fearnhead-Prangle], and run ABC on ỹ
• 87. Noisy ABC Idea: modify the data from the start, ỹ = y_0 + εζ_1, with ε on the same scale as the ABC tolerance [see Fearnhead-Prangle], and run ABC on ỹ Then ABC produces an exact simulation from π_ε(θ|ỹ) = π(θ|ỹ) [Dean et al., 2011; Fearnhead and Prangle, 2012]
• 88. Consistent noisy ABC • Degrading the data improves the estimation performances: • noisy ABC-MLE is asymptotically (in n) consistent • under further assumptions, the noisy ABC-MLE is asymptotically normal • increase in variance of order ε^{−2} • likely degradation in precision or computing time due to the lack of summary statistic [curse of dimensionality]
• 89. ABCµ [Ratmann, Andrieu, Wiuf and Richardson, 2009, PNAS] Use of a joint density f(θ, ε|y) ∝ ξ(ε|y, θ) × π_θ(θ) × π_ε(ε), where y is the data and ξ(ε|y, θ) is the prior predictive density of ρ(η(z), η(y)) given θ and y, when z ∼ f(z|θ)
• 90. ABCµ [Ratmann, Andrieu, Wiuf and Richardson, 2009, PNAS] Use of a joint density f(θ, ε|y) ∝ ξ(ε|y, θ) × π_θ(θ) × π_ε(ε), where y is the data and ξ(ε|y, θ) is the prior predictive density of ρ(η(z), η(y)) given θ and y, when z ∼ f(z|θ) Warning! Replacement of ξ(ε|y, θ) with a non-parametric kernel approximation.
• 91. ABCµ details Multidimensional distances ρ_k (k = 1, . . . , K) and errors ε_k = ρ_k(η_k(z), η_k(y)), with ε_k ∼ ξ_k(ε|y, θ) ≈ ξ̂_k(ε|y, θ) = 1/(Bh_k) Σ_b K[{ε_k − ρ_k(η_k(z_b), η_k(y))}/h_k], then used in replacing ξ(ε|y, θ) with min_k ξ̂_k(ε|y, θ)
• 92. ABCµ details Multidimensional distances ρ_k (k = 1, . . . , K) and errors ε_k = ρ_k(η_k(z), η_k(y)), with ε_k ∼ ξ_k(ε|y, θ) ≈ ξ̂_k(ε|y, θ) = 1/(Bh_k) Σ_b K[{ε_k − ρ_k(η_k(z_b), η_k(y))}/h_k], then used in replacing ξ(ε|y, θ) with min_k ξ̂_k(ε|y, θ) ABCµ involves the acceptance probability π(θ', ε')/π(θ, ε) × q(θ', θ)q(ε', ε)/{q(θ, θ')q(ε, ε')} × min_k ξ̂_k(ε'|y, θ')/min_k ξ̂_k(ε|y, θ)
• 93. ABCµ multiple errors [Figure © Ratmann et al., PNAS, 2009]
• 94. ABCµ for model choice [Figure © Ratmann et al., PNAS, 2009]
• 95. Semi-automatic ABC Fearnhead and Prangle (2012) study ABC and the selection of the summary statistic in close proximity to Wilkinson's proposal: ABC is then considered from a purely inferential viewpoint and calibrated for estimation purposes Use of a randomised (or ‘noisy’) version of the summary statistics, η̃(y) = η(y) + τε Derivation of a well-calibrated version of ABC, i.e. an algorithm that gives proper predictions for the distribution associated with this randomised summary statistic
  • 96. Summary statistics Main results: • Optimality of the posterior expectation E[θ|y] of the parameter of interest as summary statistics η(y)!
• 97. Summary statistics Main results: • Optimality of the posterior expectation E[θ|y] of the parameter of interest as summary statistic η(y)! • Use of the standard quadratic loss function (θ − θ_0)^T A (θ − θ_0)
• 98. Details on Fearnhead and Prangle (FP) ABC Use of a summary statistic S(·), an importance proposal g(·), a kernel K(·) ≤ 1 and a bandwidth h > 0 such that (θ, y_sim) ∼ g(θ)f(y_sim|θ) is accepted with probability (hence the bound) K[{S(y_sim) − s_obs}/h], with the corresponding importance weight defined by π(θ)/g(θ) [Fearnhead & Prangle, 2012]
• 99. Average acceptance asymptotics For the average acceptance probability/approximate likelihood p(θ|s_obs) = ∫ f(y_sim|θ) K[{S(y_sim) − s_obs}/h] dy_sim, the overall acceptance probability is p(s_obs) = ∫ p(θ|s_obs) π(θ) dθ = π(s_obs) h^d + o(h^d) [FP, Lemma 1]
• 100. Calibration of h “This result gives insight into how S(·) and h affect the Monte Carlo error. To minimize Monte Carlo error, we need h^d to be not too small. Thus ideally we want S(·) to be a low dimensional summary of the data that is sufficiently informative about θ that π(θ|sobs) is close, in some sense, to π(θ|yobs)” (FP, p.5) • turns h into an absolute value while it should be context-dependent and user-calibrated • only addresses one term in the approximation error and acceptance probability (“curse of dimensionality”) • h large prevents π_ABC(θ|s_obs) from being close to π(θ|s_obs) • d small prevents π(θ|s_obs) from being close to π(θ|y_obs) (“curse of [dis]information”)
• 101. Converging ABC Theorem (FP) For noisy ABC, under identifiability assumptions, the expected noisy-ABC log-likelihood, E{log[p(θ|s_obs)]} = ∫∫ log[p(θ|S(y_obs) + ε)] π(y_obs|θ_0) K(ε) dy_obs dε, has its maximum at θ = θ_0.
  • 102. Converging ABC Corollary For noisy ABC, under regularity constraints on summary statistics, the ABC posterior converges onto a point mass on the true parameter value as m → ∞.
• 103. Loss motivated statistic Under the quadratic loss function, Theorem (FP) (i) the minimal posterior error E[L(θ, θ̂)|y_obs] occurs when θ̂ = E(θ|y_obs) (!) (ii) when h → 0, E_ABC(θ|s_obs) converges to E(θ|y_obs) (iii) if S(y_obs) = E[θ|y_obs], then for θ̂ = E_ABC[θ|s_obs], E[L(θ, θ̂)|y_obs] = trace(AΣ) + h^2 ∫ x^T A x K(x) dx + o(h^2).
• 104. Optimal summary statistic “We take a different approach, and weaken the requirement for πABC to be a good approximation to π(θ|yobs). We argue for πABC to be a good approximation solely in terms of the accuracy of certain estimates of the parameters.” (FP, p.5) From this result, FP • derive their choice of summary statistic, S(y) = E(θ|y) [almost sufficient] • suggest h = O(N^{−1/(2+d)}) and h = O(N^{−1/(4+d)}) as optimal bandwidths for noisy and standard ABC.
• 105. Optimal summary statistic “We take a different approach, and weaken the requirement for πABC to be a good approximation to π(θ|yobs). We argue for πABC to be a good approximation solely in terms of the accuracy of certain estimates of the parameters.” (FP, p.5) From this result, FP • derive their choice of summary statistic, S(y) = E(θ|y) [wow! E_ABC[θ|S(y_obs)] = E[θ|y_obs]] • suggest h = O(N^{−1/(2+d)}) and h = O(N^{−1/(4+d)}) as optimal bandwidths for noisy and standard ABC.
  • 106. Caveat Since E(θ|yobs) is most usually unavailable, FP suggest (i) use a pilot run of ABC to determine a region of non-negligible posterior mass; (ii) simulate sets of parameter values and data; (iii) use the simulated sets of parameter values and data to estimate the summary statistic; and (iv) run ABC with this choice of summary statistic.
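A sketch of steps (ii)-(iv) in R, for a scalar parameter: the summary S(y) = Ê(θ|y) is built by linear regression of the simulated parameters on the raw summaries (theta_pilot, eta_pilot and the function eta() are placeholder names for the pilot simulations and the raw summary map):
fit <- lm(theta_pilot ~ eta_pilot)                  # linear approx. of E[theta|y]
S <- function(y) sum(c(1, eta(y)) * coef(fit))      # fitted summary statistic
s_obs <- S(y_obs)
# then run standard (or noisy) ABC with summary S and distance |S(z) - s_obs|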
  • 107. ABC for model choice 1 Approximate Bayesian computation 2 ABC as an inference machine 3 ABC for model choice 4 Genetics of ABC
• 108. Bayesian model choice Several models M1, M2, . . . are considered simultaneously for a dataset y, and the model index M is part of the inference. Use of a prior distribution π(M = m), plus a prior distribution on the parameter conditional on the value m of the model index, π_m(θ_m) Goal is to derive the posterior distribution of M, a challenging computational target when models are complex.
• 109. Generic ABC for model choice Algorithm 4 Likelihood-free model choice sampler (ABC-MC)
for t = 1 to T do
repeat
Generate m from the prior π(M = m)
Generate θ_m from the prior π_m(θ_m)
Generate z from the model f_m(z|θ_m)
until ρ{η(z), η(y)} < ε
Set m^(t) = m and θ^(t) = θ_m
end for
• 110. ABC estimates Posterior probability π(M = m|y) approximated by the frequency of acceptances from model m, (1/T) Σ_{t=1}^T I_{m^(t)=m}. Issues with implementation: • should tolerances ε be the same for all models? • should summary statistics vary across models (incl. their dimension)? • should the distance measure ρ vary as well?
• 111. ABC estimates Posterior probability π(M = m|y) approximated by the frequency of acceptances from model m, (1/T) Σ_{t=1}^T I_{m^(t)=m}. Extension to a weighted polychotomous logistic regression estimate of π(M = m|y), with non-parametric kernel weights [Cornuet et al., DIYABC, 2009]
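A minimal R sketch of Algorithm 4 and of the frequency estimate (prior_m, rprior[[m]], rmodel[[m]], eta() and eps are placeholders for the prior model probabilities, the per-model simulators, the summary map and the tolerance):
T_sim <- 1e5
m_acc <- integer(0)
for (t in 1:T_sim) {
  m <- sample(seq_along(prior_m), 1, prob = prior_m)
  th <- rprior[[m]]()
  z <- rmodel[[m]](th)
  if (sqrt(sum((eta(z) - eta(y_obs))^2)) <= eps)
    m_acc <- c(m_acc, m)                            # record accepted model index
}
post_prob <- table(factor(m_acc, levels = seq_along(prior_m))) / length(m_acc)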
• 112. The Great ABC controversy On-going controversy in phylogeographic genetics about the validity of using ABC for testing Against: Templeton, 2008, 2009, 2010a, 2010b, 2010c argues that nested hypotheses cannot have higher probabilities than nesting hypotheses (!)
• 113. The Great ABC controversy On-going controversy in phylogeographic genetics about the validity of using ABC for testing Against: Templeton, 2008, 2009, 2010a, 2010b, 2010c argues that nested hypotheses cannot have higher probabilities than nesting hypotheses (!) Replies: Fagundes et al., 2008, Beaumont et al., 2010, Berger et al., 2010, Csilléry et al., 2010 point out that the criticisms are addressed at [Bayesian] model-based inference and have nothing to do with ABC...
  • 114. Back to sufficiency If η1(x) sufficient statistic for model m = 1 and parameter θ1 and η2(x) sufficient statistic for model m = 2 and parameter θ2, (η1(x), η2(x)) is not always sufficient for (m, θm)
  • 115. Back to sufficiency If η1(x) sufficient statistic for model m = 1 and parameter θ1 and η2(x) sufficient statistic for model m = 2 and parameter θ2, (η1(x), η2(x)) is not always sufficient for (m, θm) c Potential loss of information at the testing level
• 116. Limiting behaviour of B12 (T → ∞) ABC approximation B12(y) = Σ_{t=1}^T I_{m_t=1} I_{ρ{η(z_t),η(y)}≤ε} / Σ_{t=1}^T I_{m_t=2} I_{ρ{η(z_t),η(y)}≤ε}, where the (m_t, z_t)'s are simulated from the (joint) prior
• 117. Limiting behaviour of B12 (T → ∞) ABC approximation B12(y) = Σ_{t=1}^T I_{m_t=1} I_{ρ{η(z_t),η(y)}≤ε} / Σ_{t=1}^T I_{m_t=2} I_{ρ{η(z_t),η(y)}≤ε}, where the (m_t, z_t)'s are simulated from the (joint) prior As T goes to infinity, the limit is B12^ε(y) = ∫∫ I_{ρ{η(z),η(y)}≤ε} π_1(θ_1) f_1(z|θ_1) dz dθ_1 / ∫∫ I_{ρ{η(z),η(y)}≤ε} π_2(θ_2) f_2(z|θ_2) dz dθ_2 = ∫∫ I_{ρ{η,η(y)}≤ε} π_1(θ_1) f_1^η(η|θ_1) dη dθ_1 / ∫∫ I_{ρ{η,η(y)}≤ε} π_2(θ_2) f_2^η(η|θ_2) dη dθ_2, where f_1^η(η|θ_1) and f_2^η(η|θ_2) are the distributions of η(z)
• 118. Limiting behaviour of B12 (ε → 0) When ε goes to zero, B12^η(y) = ∫ π_1(θ_1) f_1^η(η(y)|θ_1) dθ_1 / ∫ π_2(θ_2) f_2^η(η(y)|θ_2) dθ_2,
• 119. Limiting behaviour of B12 (ε → 0) When ε goes to zero, B12^η(y) = ∫ π_1(θ_1) f_1^η(η(y)|θ_1) dθ_1 / ∫ π_2(θ_2) f_2^η(η(y)|θ_2) dθ_2, c Bayes factor based on the sole observation of η(y)
• 120. Limiting behaviour of B12 (under sufficiency) If η(y) is a sufficient statistic for both models, f_i(y|θ_i) = g_i(y) f_i^η(η(y)|θ_i). Thus B12(y) = ∫_{Θ1} π_1(θ_1) g_1(y) f_1^η(η(y)|θ_1) dθ_1 / ∫_{Θ2} π_2(θ_2) g_2(y) f_2^η(η(y)|θ_2) dθ_2 = {g_1(y)/g_2(y)} × ∫ π_1(θ_1) f_1^η(η(y)|θ_1) dθ_1 / ∫ π_2(θ_2) f_2^η(η(y)|θ_2) dθ_2 = {g_1(y)/g_2(y)} B12^η(y). [Didelot, Everitt, Johansen & Lawson, 2011]
• 121. Limiting behaviour of B12 (under sufficiency) If η(y) is a sufficient statistic for both models, f_i(y|θ_i) = g_i(y) f_i^η(η(y)|θ_i). Thus B12(y) = ∫_{Θ1} π_1(θ_1) g_1(y) f_1^η(η(y)|θ_1) dθ_1 / ∫_{Θ2} π_2(θ_2) g_2(y) f_2^η(η(y)|θ_2) dθ_2 = {g_1(y)/g_2(y)} × ∫ π_1(θ_1) f_1^η(η(y)|θ_1) dθ_1 / ∫ π_2(θ_2) f_2^η(η(y)|θ_2) dθ_2 = {g_1(y)/g_2(y)} B12^η(y). [Didelot, Everitt, Johansen & Lawson, 2011] c No discrepancy only when there is cross-model sufficiency
• 122. MA(q) divergence [Figure: evolution [against ε] of the ABC Bayes factor, in terms of frequencies of visits to models MA(1) (left) and MA(2) (right), when ε equals the 10, 1, .1, .01% quantiles on insufficient autocovariance distances. Sample of 50 points from an MA(2) model with θ_1 = 0.6, θ_2 = 0.2. True Bayes factor equal to 17.71.]
• 123. MA(q) divergence [Figure: same evaluation for a sample of 50 points from an MA(1) model with θ_1 = 0.6. True Bayes factor B21 equal to .004.]
• 124. Further comments ‘There should be the possibility that for the same model, but different (non-minimal) [summary] statistics (so different η's: η_1 and η*_1) the ratio of evidences may no longer be equal to one.’ [Michael Stumpf, Jan. 28, 2011, ’Og] Using different summary statistics [on different models] may indicate the loss of information brought by each set, but agreement does not lead to trustworthy approximations.
• 125. LDA summaries for model choice In parallel to F&P's semi-automatic ABC, the selection of the most discriminant subvector out of a collection of summary statistics can be based on Linear Discriminant Analysis (LDA) [Estoup et al., 2012, Mol. Ecol. Res.] Solution now implemented in DIYABC.2 [Cornuet et al., 2008, Bioinf.; Estoup et al., 2013]
• 126. Implementation Step 1: Take a subset of the α% (e.g., 1%) best simulations from an ABC reference table, usually including 10^6–10^9 simulations for each of the M compared scenarios/models Selection based on the normalized Euclidean distance computed between the observed and simulated raw summaries Step 2: Run LDA on this subset to transform the summaries into (M − 1) discriminant variables Step 3: Estimation of the posterior probabilities of each competing scenario/model by polychotomous local logistic regression against the M − 1 most discriminant variables [Cornuet et al., 2008, Bioinformatics]
• 127. Implementation Step 1: Take a subset of the α% (e.g., 1%) best simulations from an ABC reference table, usually including 10^6–10^9 simulations for each of the M compared scenarios/models Step 2: Run LDA on this subset to transform the summaries into (M − 1) discriminant variables When computing the LDA functions, weight the simulated data with an Epanechnikov kernel Step 3: Estimation of the posterior probabilities of each competing scenario/model by polychotomous local logistic regression against the M − 1 most discriminant variables [Cornuet et al., 2008, Bioinformatics]
  • 128. LDA advantages • much faster computation of scenario probabilities via polychotomous regression • a (much) lower number of explanatory variables improves the accuracy of the ABC approximation, reduces the tolerance and avoids extra costs in constructing the reference table • allows for a large collection of initial summaries • ability to evaluate Type I and Type II errors on more complex models [more on this later] • LDA reduces correlation among explanatory variables
  • 129. LDA advantages • much faster computation of scenario probabilities via polychotomous regression • a (much) lower number of explanatory variables improves the accuracy of the ABC approximation, reduces the tolerance and avoids extra costs in constructing the reference table • allows for a large collection of initial summaries • ability to evaluate Type I and Type II errors on more complex models [more on this later] • LDA reduces correlation among explanatory variables When available, using both simulated and real data sets, posterior probabilities of scenarios computed from LDA-transformed and raw summaries are strongly correlated
  • 130. A stylised problem Central question to the validation of ABC for model choice: When is a Bayes factor based on an insufficient statistic T(y) consistent?
• 131. A stylised problem Central question to the validation of ABC for model choice: When is a Bayes factor based on an insufficient statistic T(y) consistent? Note/warning: a conclusion drawn on T(y) through B_12^T(y) necessarily differs from a conclusion drawn on y through B12(y) [Marin, Pillai, X, Rousseau, JRSS B, 2013]
• 132. A benchmark, if toy, example Comparison suggested by a referee of the PNAS paper [thanks!]: [X, Cornuet, Marin, Pillai, Aug. 2011] Model M1: y ∼ N(θ_1, 1), opposed to model M2: y ∼ L(θ_2, 1/√2), the Laplace distribution with mean θ_2 and scale parameter 1/√2 (variance one). Four possible statistics: 1. the sample mean ȳ (sufficient for M1 if not M2); 2. the sample median med(y) (insufficient); 3. the sample variance var(y) (ancillary); 4. the median absolute deviation mad(y) = med(|y − med(y)|)
• 133. A benchmark, if toy, example Comparison suggested by a referee of the PNAS paper [thanks!]: [X, Cornuet, Marin, Pillai, Aug. 2011] Model M1: y ∼ N(θ_1, 1), opposed to model M2: y ∼ L(θ_2, 1/√2), the Laplace distribution with mean θ_2 and scale parameter 1/√2 (variance one). [Figure: boxplots of ABC posterior probabilities under the Gauss and Laplace models, n = 100]
• 134. Framework Starting from the sample y = (y_1, . . . , y_n), the observed sample, not necessarily iid, with true distribution y ∼ P^n Summary statistics T(y) = T^n = (T_1(y), T_2(y), · · · , T_d(y)) ∈ R^d with true distribution T^n ∼ G^n
• 135. Framework c Comparison of – under M1, y ∼ F_{1,n}(·|θ1) where θ1 ∈ Θ1 ⊂ R^{p1} – under M2, y ∼ F_{2,n}(·|θ2) where θ2 ∈ Θ2 ⊂ R^{p2} turned into – under M1, T(y) ∼ G_{1,n}(·|θ1), and θ1|T(y) ∼ π_1(·|T^n) – under M2, T(y) ∼ G_{2,n}(·|θ2), and θ2|T(y) ∼ π_2(·|T^n)
  • 136. Assumptions A collection of asymptotic “standard” assumptions: [A1] is a standard central limit theorem under the true model with asymptotic mean µ0 [A2] controls the large deviations of the estimator Tn from the model mean µ(θ) [A3] is the standard prior mass condition found in Bayesian asymptotics (di effective dimension of the parameter) [A4] restricts the behaviour of the model density against the true density [Think CLT!]
• 137. Asymptotic marginals Asymptotically, under [A1]–[A4], m_i(t) = ∫_{Θi} g_i(t|θ_i) π_i(θ_i) dθ_i is such that (i) if inf{|µ_i(θ_i) − µ_0|; θ_i ∈ Θ_i} = 0, C_l v_n^{d−d_i} ≤ m_i(T^n) ≤ C_u v_n^{d−d_i}, and (ii) if inf{|µ_i(θ_i) − µ_0|; θ_i ∈ Θ_i} > 0, m_i(T^n) = o_{P^n}[v_n^{d−τ_i} + v_n^{d−α_i}].
• 138. Between-model consistency Consequence of the above is that the asymptotic behaviour of the Bayes factor is driven by the asymptotic mean value µ(θ) of T^n under both models. And only by this mean value!
• 139. Between-model consistency Consequence of the above is that the asymptotic behaviour of the Bayes factor is driven by the asymptotic mean value µ(θ) of T^n under both models. And only by this mean value! Indeed, if inf{|µ_0 − µ_2(θ_2)|; θ_2 ∈ Θ_2} = inf{|µ_0 − µ_1(θ_1)|; θ_1 ∈ Θ_1} = 0 then C_l v_n^{−(d_1−d_2)} ≤ m_1(T^n)/m_2(T^n) ≤ C_u v_n^{−(d_1−d_2)}, where C_l, C_u = O_{P^n}(1), irrespective of the true model. c Only depends on the difference d_1 − d_2: no consistency
• 140. Between-model consistency Consequence of the above is that the asymptotic behaviour of the Bayes factor is driven by the asymptotic mean value µ(θ) of T^n under both models. And only by this mean value! Else, if inf{|µ_0 − µ_2(θ_2)|; θ_2 ∈ Θ_2} > inf{|µ_0 − µ_1(θ_1)|; θ_1 ∈ Θ_1} = 0, then m_1(T^n)/m_2(T^n) ≥ C_u min{v_n^{−(d_1−α_2)}, v_n^{−(d_1−τ_2)}}
• 141. Checking for adequate statistics Run a practical check of the relevance (or non-relevance) of T^n: null hypothesis that both models are compatible with the statistic T^n, H0: inf{|µ_2(θ_2) − µ_0|; θ_2 ∈ Θ_2} = 0, against H1: inf{|µ_2(θ_2) − µ_0|; θ_2 ∈ Θ_2} > 0; the testing procedure provides estimates of the mean of T^n under each model and checks for equality
• 142. Checking in practice • Under each model M_i, generate an ABC sample θ_{i,l}, l = 1, · · · , L • For each θ_{i,l}, generate y_{i,l} ∼ F_{i,n}(·|θ_{i,l}), derive T^n(y_{i,l}) and compute µ̂_i = (1/L) Σ_{l=1}^L T^n(y_{i,l}), i = 1, 2. • Conditionally on T^n(y), √L {µ̂_i − E^π[µ_i(θ_i)|T^n(y)]} ⇝ N(0, V_i) • Test for a common mean, H0: µ̂_1 ∼ N(µ_0, V_1), µ̂_2 ∼ N(µ_0, V_2), against the alternative of different means, H1: µ̂_i ∼ N(µ_i, V_i), with µ_1 ≠ µ_2
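A sketch of this check in R, comparing the two estimates of the mean of T^n with a normal two-sample statistic (Tn_sim is a placeholder for a list of two L × d matrices of summaries simulated under each model from its ABC posterior sample):
mu_hat <- lapply(Tn_sim, colMeans)                  # model-based mean estimates
V_hat  <- lapply(Tn_sim, function(s) cov(s) / nrow(s))
diff   <- mu_hat[[1]] - mu_hat[[2]]
stat   <- c(t(diff) %*% solve(V_hat[[1]] + V_hat[[2]]) %*% diff)
pval   <- pchisq(stat, df = length(diff), lower.tail = FALSE)  # H0: common mean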
• 143. Toy example: Laplace versus Gauss [Figure: boxplots of the normalised χ² statistics without and with mad]
  • 144. Genetics of ABC 1 Approximate Bayesian computation 2 ABC as an inference machine 3 ABC for model choice 4 Genetics of ABC
• 145. Genetic background of ABC ABC is a recent computational technique that only requires being able to sample from the likelihood f(·|θ) This technique stemmed from population genetics models, about 15 years ago, and population geneticists still contribute significantly to methodological developments of ABC. [Griffiths et al., 1997; Tavaré et al., 1999]
• 146. Population genetics [Part derived from the teaching material of Raphael Leblois, ENS Lyon, November 2010] • Describe the genotypes, estimate the allele frequencies, determine their distribution among individuals, populations and between populations; • Predict and understand the evolution of gene frequencies in populations as a result of various factors. c Analyses the effect of various evolutionary forces (mutation, drift, migration, selection) on the evolution of gene frequencies in time and space.
• 147. Wright-Fisher model • A population of constant size, in which individuals reproduce at the same time. • Each gene in a generation is a copy of a gene of the previous generation. • In the absence of mutation and selection, allele frequencies drift (rise and fall) inevitably until the fixation of one allele; drift thus leads to the loss of genetic variation within populations.
• 148. Coalescent theory [Kingman, 1982; Tajima, Tavaré, etc.] Coalescence theory is interested in the genealogy of a sample of genes, back in time to the common ancestor of the sample.
• 149. Common ancestor Modelling of the genetic drift process “backward in time”, up to the common ancestor of a sample of genes: the different lineages merge (coalesce) as one moves back into the past. [Figure: time of coalescence (T)]
• 150. Neutral mutations • Under the assumption of neutrality of the genetic markers, the mutations are independent of the genealogy, i.e. the genealogy depends only on the demographic processes. • The genealogy is thus constructed according to the demographic parameters (e.g., N), then the mutations are added a posteriori on the different branches, from the MRCA to the leaves of the tree. • This yields polymorphism data under the demographic and mutational models considered.
  • 151. Neutral model at a given microsatellite locus, in a closed panmictic population at equilibrium Kingman’s genealogy When time axis is normalized, T(k) ∼ Exp(k(k −1)/2)
  • 152. Neutral model at a given microsatellite locus, in a closed panmictic population at equilibrium Kingman’s genealogy When time axis is normalized, T(k) ∼ Exp(k(k −1)/2) Mutations according to the Simple stepwise Mutation Model (SMM) • date of the mutations ∼ Poisson process with intensity θ/2 over the branches
• 153. Neutral model at a given microsatellite locus, in a closed panmictic population at equilibrium Observations: leaves of the tree, θ̂ = ? Kingman's genealogy When the time axis is normalized, T(k) ∼ Exp(k(k −1)/2) Mutations according to the Simple stepwise Mutation Model (SMM) • dates of the mutations ∼ Poisson process with intensity θ/2 over the branches • MRCA = 100 • independent mutations: ±1 with pr. 1/2 (a simulation sketch in R follows below)
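A self-contained R sketch of this neutral model: Kingman coalescence times first, then SMM mutations dropped on the branches (sim_smm is a hypothetical name; the function returns the n observed allele sizes at the leaves):
sim_smm <- function(n, theta, mrca = 100) {
  parent <- rep(NA, 2 * n - 1)                      # node 2n-1 will be the root
  depth <- rep(0, 2 * n - 1)                        # coalescence time of each node
  active <- 1:n                                     # lineages still uncoalesced
  time <- 0
  nxt <- n + 1
  while (length(active) > 1) {
    k <- length(active)
    time <- time + rexp(1, rate = k * (k - 1) / 2)  # T(k) ~ Exp(k(k-1)/2)
    pair <- sample(active, 2)                       # two lineages coalesce
    parent[pair] <- nxt
    depth[nxt] <- time
    active <- c(setdiff(active, pair), nxt)
    nxt <- nxt + 1
  }
  blen <- depth[parent] - depth                     # branch lengths (NA at root)
  alleles <- rep(NA, 2 * n - 1)
  alleles[2 * n - 1] <- mrca                        # MRCA allele size
  for (v in (2 * n - 2):1) {                        # parents have larger indices
    nmut <- rpois(1, theta / 2 * blen[v])           # Poisson number of mutations
    alleles[v] <- alleles[parent[v]] +
      sum(sample(c(-1, 1), nmut, replace = TRUE))   # each step +/-1 w.p. 1/2
  }
  alleles[1:n]                                      # allele sizes at the leaves
}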
• 154. Much more interesting models. . . • several independent loci: independent gene genealogies and mutations • different populations linked by an evolutionary scenario made of divergences, admixtures, migrations between populations, selection pressure, etc. • larger sample size, usually between 50 and 100 genes
• 155. Available population scenarios Between populations, three types of events, backward in time: • the divergence is the fusion of two populations, • the admixture is the split of a population into two parts, • the migration allows the move of some lineages of one population to another. [Figures: a genealogy of five individuals from a single closed population at equilibrium, with independent inter-coalescence times T_2, . . . , T_5 and T_k ∼ Exp(k(k − 1)/2); graphical representations of the divergence, admixture and migration events]
• 156. A complex scenario The goal is to discriminate between different population scenarios from a dataset of polymorphism (DNA sample) y observed at the present time. [Figure: a complex evolutionary scenario involving four sampled populations Pop1, . . . , Pop4, two unobserved populations Pop5 and Pop6, and divergence, admixture and migration events]
• 157. Demo-genetic inference Each model is characterized by a set of parameters θ that cover historical (divergence times, admixture times, ...), demographic (population sizes, admixture rates, migration rates, ...) and genetic (mutation rates, ...) factors The goal is to estimate these parameters from a dataset of polymorphism (DNA sample) y observed at the present time Problem: most of the time, we cannot compute the likelihood of the polymorphism data, f(y|θ).
• 158. Untractable likelihood Missing (too missing!) data structure: f(y|θ) = ∫_G f(y|G, θ) f(G|θ) dG The genealogies are considered as nuisance parameters. This problem thus differs from the phylogenetic approach, where the tree is the parameter of interest.
• 159. A genuine example of application Pygmy populations: do they have a common origin? Are there many exchanges between pygmy and non-pygmy populations?
• 161. Simulation results Different possible scenarios; scenario choice by ABC: scenario 1a is strongly supported relative to the others, arguing for a common origin of the West African pygmy populations [Verdu et al., 2009] c Scenario 1A is chosen.
• 162. Most likely scenario Evolutionary scenario: a history is “told” on the basis of these inferences [Verdu et al., 2009]
• 163. Instance of ecological questions [message in a beetle] • How did the Asian ladybird beetle arrive in Europe? • Why do they swarm right now? • What are the routes of invasion? • How can one get rid of them? • Why did the chicken cross the road? [Lombaert et al., 2010, PLoS ONE] [Figure: beetles in forests]
  • 164. Worldwide invasion routes of Harmonia Axyridis For each outbreak, the arrow indicates the most likely invasion pathway and the associated posterior probability, with 95% credible intervals in brackets [Estoup et al., 2012, Molecular Ecology Res.]
• 166. A population genetic illustration of ABC model choice Two populations (1 and 2) having diverged at a fixed known time in the past and a third population (3) which diverged from one of those two populations (models 1 and 2, respectively). Observation of 50 diploid individuals/population genotyped at 5, 50 or 100 independent microsatellite loci. [Figure: model 2]
  • 167. A population genetic illustration of ABC model choice Two populations (1 and 2) having diverged at a fixed known time in the past and third population (3) which diverged from one of those two populations (models 1 and 2, respectively). Observation of 50 diploid individuals/population genotyped at 5, 50 or 100 independent microsatellite loci. Stepwise mutation model: the number of repeats of the mutated gene increases or decreases by one. Mutation rate µ common to all loci set to 0.005 (single parameter) with uniform prior distribution µ ∼ U[0.0001, 0.01]
• 168. A population genetic illustration of ABC model choice Summary statistics associated with the (δµ)^2 distance: x_{l,i,j} denotes the repeat number of the allele at locus l = 1, . . . , L for gene i = 1, . . . , 100 within population j = 1, 2, 3. Then (δµ)^2_{j1,j2} = (1/L) Σ_{l=1}^L { (1/100) Σ_{i1=1}^{100} x_{l,i1,j1} − (1/100) Σ_{i2=1}^{100} x_{l,i2,j2} }^2.
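A one-line R sketch of this statistic (x is a placeholder for an L × 100 × 3 array of allele sizes, indexed locus × gene × population):
delta_mu2 <- function(x, j1, j2)
  mean((rowMeans(x[, , j1]) - rowMeans(x[, , j2]))^2)   # (delta mu)^2 between j1, j2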
• 169. A population genetic illustration of ABC model choice For two copies of locus l with allele sizes x_{l,i,j1} and x_{l,i',j2}, most recent common ancestor at coalescence time τ_{j1,j2}, gene genealogy distance of 2τ_{j1,j2}, hence number of mutations Poisson with parameter 2µτ_{j1,j2}. Therefore E[{x_{l,i,j1} − x_{l,i',j2}}^2 | τ_{j1,j2}] = 2µτ_{j1,j2} and
Model 1 | Model 2
E[(δµ)^2_{1,2}]: 2µ_1 t | 2µ_2 t
E[(δµ)^2_{1,3}]: µ_1 t | 2µ_2 t
E[(δµ)^2_{2,3}]: 2µ_1 t | µ_2 t
• 170. A population genetic illustration of ABC model choice Thus, • a Bayes factor based only on the distance (δµ)^2_{1,2} is not convergent: if µ_1 = µ_2, same expectation • a Bayes factor based only on the distance (δµ)^2_{1,3} or (δµ)^2_{2,3} is not convergent: if µ_1 = 2µ_2 or 2µ_1 = µ_2, same expectation • if two of the three distances are used, the Bayes factor converges: there is no (µ_1, µ_2) for which all expectations are equal
• 171. A population genetic illustration of ABC model choice [Figure: boxplots of the posterior probabilities that the data come from model 1, for 5, 50 and 100 loci, using (δµ)^2_{1,2}, (δµ)^2_{1,3}, and the pair {(δµ)^2_{1,3}, (δµ)^2_{2,3}}]
• 172. A population genetic illustration of ABC model choice [Figure: test statistics (left) and p-values (right) of the adequacy check for the three distances (δµ)^2_{1,2}, (δµ)^2_{1,3}, (δµ)^2_{2,3}]