Marcus Hutter           -1-             Universal Induction & Intelligence



Foundations of Machine Learning

                          Marcus Hutter
                   Canberra, ACT, 0200, Australia
                    http://www.hutter1.net/



                    ANU        RSISE     NICTA

                Machine Learning Summer School
                MLSS-2008, 2 – 15 March, Kioloa
Marcus Hutter               -2-            Universal Induction & Intelligence

                              Overview
 • Setup: Given (non)iid data D = (x1 , ..., xn ), predict xn+1
 • Ultimate goal is to maximize profit or minimize loss
 • Consider models/hypotheses Hi ∈ M
 • Max.Likelihood: Hbest = arg maxi p(D|Hi ) (overfits if M large)
 • Bayes: Posterior probability of Hi is p(Hi |D) ∝ p(D|Hi )p(Hi )
 • Bayes needs prior(Hi )
 • Occam+Epicurus: High prior for simple models.
 • Kolmogorov/Solomonoff: Quantification of simplicity/complexity
 • Bayes works if D is sampled from Htrue ∈ M
 • Universal AI = Universal Induction + Sequential Decision Theory
Marcus Hutter              -3-               Universal Induction & Intelligence




                              Abstract
Machine learning is concerned with developing algorithms that learn
from experience, build models of the environment from the acquired
knowledge, and use these models for prediction. Machine learning is
usually taught as a bunch of methods that can solve a bunch of
problems (see my Introduction to SML last week). The following
tutorial takes a step back and asks about the foundations of machine
learning, in particular the (philosophical) problem of inductive inference,
(Bayesian) statistics, and artificial intelligence. The tutorial concentrates
on principled, unified, and exact methods.
Marcus Hutter                -4-              Universal Induction & Intelligence




                        Table of Contents
                • Overview
                • Philosophical Issues
                • Bayesian Sequence Prediction
                • Universal Inductive Inference
                • The Universal Similarity Metric
                • Universal Artificial Intelligence
                • Wrap Up
                • Literature
Marcus Hutter             -5-              Universal Induction & Intelligence




            Philosophical Issues: Contents
                • Philosophical Problems
                • On the Foundations of Machine Learning
                • Example 1: Probability of Sunrise Tomorrow
                • Example 2: Digits of a Computable Number
                • Example 3: Number Sequences
                • Occam’s Razor to the Rescue
                • Grue Emerald and Confirmation Paradoxes
                • What this Tutorial is (Not) About
                • Sequential/Online Prediction – Setup
Marcus Hutter              -6-               Universal Induction & Intelligence

                Philosophical Issues: Abstract
I start by considering the philosophical problems concerning machine
learning in general and induction in particular. I illustrate the problems
and their intuitive solution on various (classical) induction examples.
The common principle to their solution is Occam’s simplicity principle.
Based on Occam’s and Epicurus’ principle, Bayesian probability theory,
and Turing’s universal machine, Solomonoff developed a formal theory
of induction. I describe the sequential/online setup considered in this
tutorial and place it into the wider machine learning context.
Marcus Hutter             -7-               Universal Induction & Intelligence




                  Philosophical Problems
         • Does inductive inference work? Why? How?

         • How to choose the model class?

         • How to choose the prior?

         • How to make optimal decisions in unknown environments?

         • What is intelligence?
Marcus Hutter            -8-               Universal Induction & Intelligence

   On the Foundations of Machine Learning
 • Example: Algorithm/complexity theory: The goal is to find fast
   algorithms solving problems and to show lower bounds on their
   computation time. Everything is rigorously defined: algorithm,
   Turing machine, problem classes, computation time, ...
 • Most disciplines start with an informal way of attacking a subject.
   With time they get more and more formalized often to a point
   where they are completely rigorous. Examples: set theory, logical
   reasoning, proof theory, probability theory, infinitesimal calculus,
   energy, temperature, quantum field theory, ...
 • Machine learning: Tries to build and understand systems that learn
   from past data, make good predictions, are able to generalize, act
   intelligently, ... Many terms are only vaguely defined or there are
   many alternate definitions.
Marcus Hutter                -9-              Universal Induction & Intelligence

 Example 1: Probability of Sunrise Tomorrow
What is the probability p(1|1^d) that the sun will rise tomorrow?
(d = past # days the sun rose, 1 = sun rises, 0 = sun will not rise)
 • p is undefined, because there has never been an experiment that
   tested the existence of the sun tomorrow (ref. class problem).
 • p = 1, because the sun rose in all past experiments.
 • p = 1 − ε, where ε is the proportion of stars that explode per day.
 • p = (d+1)/(d+2), which is Laplace's rule derived from Bayes' rule.
 • Derive p from the type, age, size and temperature of the sun, even
   though we never observed another star with those exact properties.
Conclusion: We predict that the sun will rise tomorrow with high
probability independent of the justification.
Marcus Hutter            - 10 -            Universal Induction & Intelligence



 Example 2: Digits of a Computable Number
 • Extend 14159265358979323846264338327950288419716939937?
 • Looks random?!
 • Frequency estimate: n = length of sequence, ki = number of occurrences
   of digit i =⇒ probability of the next digit being i is ki/n. Asymptotically
   ki/n → 1/10 (seems to be) true.
 • But we have the strong feeling that (i.e. with high probability) the
   next digit will be 5 because the previous digits were the expansion
   of π.
 • Conclusion: We prefer answer 5, since we see more structure in the
   sequence than just random digits.
Marcus Hutter             - 11 -             Universal Induction & Intelligence

            Example 3: Number Sequences
Sequence: x1 , x2 , x3 , x4 , x5 , ...
            1,    2,   3,     4,   ?, ...
  • x5 = 5, since xi = i for i = 1..4.
  • x5 = 29, since xi = i^4 − 10i^3 + 35i^2 − 49i + 24.
Conclusion: We prefer 5, since the linear relation involves fewer arbitrary
parameters than the 4th-order polynomial (quick check below).
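A quick numeric check (a hypothetical Python sketch, not part of the slides) confirms that both rules fit the data 1, 2, 3, 4 and diverge only at i = 5:

```python
# Both candidate rules fit x_1..x_4 = 1,2,3,4 but differ at i = 5.
def linear(i):
    return i

def quartic(i):
    return i**4 - 10*i**3 + 35*i**2 - 49*i + 24

print([linear(i) for i in range(1, 6)])    # [1, 2, 3, 4, 5]
print([quartic(i) for i in range(1, 6)])   # [1, 2, 3, 4, 29]
```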
Sequence: 2,3,5,7,11,13,17,19,23,29,31,37,41,43,47,53,59,?
  • 61, since this is the next prime
  • 60, since this is the order of the next simple group
Conclusion: We prefer answer 61, since primes are a more familiar
concept than simple groups.
On-Line Encyclopedia of Integer Sequences:
http://www.research.att.com/∼njas/sequences/
Marcus Hutter            - 12 -            Universal Induction & Intelligence

                Occam’s Razor to the Rescue
 • Is there a unique principle which allows us to formally arrive at a
   prediction which
   - coincides (always?) with our intuitive guess -or- even better,
   - which is (in some sense) most likely the best or correct answer?
 • Yes! Occam’s razor: Use the simplest explanation consistent with
   past data (and use it for prediction).
 • Works! For examples presented and for many more.
 • Actually Occam’s razor can serve as a foundation of machine
   learning in general, and is even a fundamental principle (or maybe
   even the mere definition) of science.
 • Problem: Not a formal/mathematical objective principle.
   What is simple for one may be complicated for another.
Marcus Hutter            - 13 -             Universal Induction & Intelligence



                  Grue Emerald Paradox
    Hypothesis 1: All emeralds are green.

    Hypothesis 2: All emeralds found till y2010 are green,
                  thereafter all emeralds are blue.

 • Which hypothesis is more plausible? H1! Justification?

 • Occam’s razor: take the simplest hypothesis consistent with the data.
   It is the most important principle in machine learning and science.
Marcus Hutter            - 14 -           Universal Induction & Intelligence



                  Confirmation Paradox
(i) R → B is confirmed by an R-instance with property B
(ii) ¬B → ¬R is confirmed by a ¬B-instance with property ¬R.
(iii) Since R → B and ¬B → ¬R are logically equivalent,
R → B is also confirmed by a ¬B-instance with property ¬R.

Example: Hypothesis (o): All ravens are black (R=Raven, B=Black).
(i) observing a Black Raven confirms Hypothesis (o).
(iii) observing a White Sock also confirms that all Ravens are Black,
since a White Sock is a non-Raven which is non-Black.

This conclusion sounds absurd.
Marcus Hutter            - 15 -            Universal Induction & Intelligence




                        Problem Setup
 • Induction problems can be phrased as sequence prediction tasks.
 • Classification is a special case of sequence prediction.
   (With some tricks the other direction is also true)
 • This tutorial focusses on maximizing profit (minimizing loss).
   We’re not (primarily) interested in finding a (true/predictive/causal)
   model.
 • Separating noise from data is not necessary in this setting!
Marcus Hutter             - 16 -           Universal Induction & Intelligence

         What This Tutorial is (Not) About
       Dichotomies in Artificial Intelligence & Machine Learning
       scope of my tutorial         ⇔   scope of other tutorials
             (machine) learning     ⇔   (GOFAI) knowledge-based
                      statistical   ⇔   logic-based
      decision ⇔ prediction         ⇔   induction ⇔ action
                   classification    ⇔   regression
            sequential / non-iid    ⇔   independent identically distributed
                 online learning    ⇔   offline/batch learning
              passive prediction    ⇔   active learning
              Bayes ⇔ MDL           ⇔   Expert ⇔ Frequentist
        uninformed / universal      ⇔   informed / problem-specific
conceptual/mathematical issues      ⇔   computational issues
                exact/principled    ⇔   heuristic
             supervised learning    ⇔   unsupervised ⇔ RL learning
                    exploitation    ⇔   exploration
Marcus Hutter                - 17 -                 Universal Induction & Intelligence

       Sequential/Online Prediction – Setup
In sequential or online prediction, for times t = 1, 2, 3, ...,
our predictor p makes a prediction y_t^p ∈ Y
based on past observations x_1, ..., x_{t−1}.
Thereafter x_t ∈ X is observed and p suffers Loss(x_t, y_t^p).
The goal is to design predictors with small total (cumulative) loss
Loss_{1:T}(p) := Σ_{t=1}^T Loss(x_t, y_t^p).
Applications are abundant, e.g. weather or stock market forecasting.

Example:   Loss(x, y)          x = sunny   x = rainy
           y = umbrella           0.1         0.3
           y = sunglasses         0.0         1.0

Setup also includes: Classification and Regression problems.
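As a concrete illustration of this protocol, here is a minimal Python sketch (the predictor rule and observation sequence are made up for illustration; only the loss table comes from the example above):

```python
# Online prediction: commit to y_t, observe x_t, accumulate Loss(x_t, y_t).
LOSS = {("sunny", "umbrella"):   0.1, ("rainy", "umbrella"):   0.3,
        ("sunny", "sunglasses"): 0.0, ("rainy", "sunglasses"): 1.0}

def predict(history):
    # Hypothetical predictor: take the umbrella only if it rained last time.
    return "umbrella" if history and history[-1] == "rainy" else "sunglasses"

total_loss, history = 0.0, []
for x_t in ["sunny", "rainy", "rainy", "sunny"]:   # observations revealed one by one
    y_t = predict(history)                          # prediction made before seeing x_t
    total_loss += LOSS[(x_t, y_t)]
    history.append(x_t)
print("cumulative loss Loss_{1:T}(p) =", total_loss)
```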
Marcus Hutter           - 18 -             Universal Induction & Intelligence

    Bayesian Sequence Prediction: Contents
                • Uncertainty and Probability
                • Frequency Interpretation: Counting
                • Objective Interpretation: Uncertain Events
                • Subjective Interpretation: Degrees of Belief
                • Bayes’ and Laplace’s Rules
                • Envelope Paradox
                • The Bayes-mixture distribution
                • Relative Entropy and Bound
                • Predictive Convergence
                • Sequential Decisions and Loss Bounds
                • Generalization: Continuous Probability Classes
                • Summary
Marcus Hutter             - 19 -            Universal Induction & Intelligence

    Bayesian Sequence Prediction: Abstract
The aim of probability theory is to describe uncertainty. There are
various sources and interpretations of uncertainty. I compare the
frequency, objective, and subjective probabilities, and show that they all
respect the same rules, and derive Bayes’ and Laplace’s famous and
fundamental rules. Then I concentrate on general sequence prediction
tasks. I define the Bayes mixture distribution and show that the
posterior converges rapidly to the true posterior by exploiting some
bounds on the relative entropy. Finally I show that the mixture predictor
is also optimal in a decision-theoretic sense w.r.t. any bounded loss
function.
Marcus Hutter             - 20 -            Universal Induction & Intelligence




                Uncertainty and Probability
The aim of probability theory is to describe uncertainty.
Sources/interpretations for uncertainty:
 • Frequentist: probabilities are relative frequencies.
   (e.g. the relative frequency of tossing head.)
 • Objectivist: probabilities are real aspects of the world.
   (e.g. the probability that some atom decays in the next hour)
 • Subjectivist: probabilities describe an agent’s degree of belief.
   (e.g. it is (im)plausible that extraterrestrials exist)
Marcus Hutter             - 21 -            Universal Induction & Intelligence

        Frequency Interpretation: Counting
 • The frequentist interprets probabilities as relative frequencies.
 • If in a sequence of n independent identically distributed (i.i.d.)
   experiments (trials) an event occurs k(n) times, the relative
   frequency of the event is k(n)/n.
 • The limit limn→∞ k(n)/n is defined as the probability of the event.
 • For instance, the probability of the event head in a sequence of
   repeatedly tossing a fair coin is 1/2.

 • The frequentist position is the easiest to grasp, but it has several
   shortcomings:
 • Problems: definition circular, limited to i.i.d, reference class
   problem.
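A tiny simulation (a hedged sketch using Python's random module to stand in for the repeated trials) illustrates the limiting relative frequency k(n)/n → 1/2:

```python
import random

random.seed(0)
for n in (10, 1_000, 100_000):
    k = sum(random.random() < 0.5 for _ in range(n))   # k(n) = number of heads
    print(n, k / n)                                     # relative frequency k(n)/n -> 1/2
```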
Marcus Hutter             - 22 -            Universal Induction & Intelligence

  Objective Interpretation: Uncertain Events
 • For the objectivist probabilities are real aspects of the world.
 • The outcome of an observation or an experiment is not
   deterministic, but involves physical random processes.
 • The set Ω of all possible outcomes is called the sample space.
 • It is said that an event E ⊂ Ω occurred if the outcome is in E.
 • In the case of i.i.d. experiments the probabilities p assigned to
   events E should be interpretable as limiting frequencies, but the
   application is not limited to this case.
 • (Some) probability axioms:
   p(Ω) = 1 and p({}) = 0 and 0 ≤ p(E) ≤ 1.
   p(A ∪ B) = p(A) + p(B) − p(A ∩ B).
   p(B|A) = p(A∩B)/p(A) is the probability of B given that event A occurred.
Marcus Hutter             - 23 -             Universal Induction & Intelligence

 Subjective Interpretation: Degrees of Belief
 • The subjectivist uses probabilities to characterize an agent’s degree
   of belief in something, rather than to characterize physical random
   processes.
 • This is the most relevant interpretation of probabilities in AI.
 • We define the plausibility of an event as the degree of belief in the
   event, or the subjective probability of the event.
 • It is natural to assume that plausibilities/beliefs Bel(·|·) can be represented
   by real numbers, that the rules qualitatively correspond to common
   sense, and that the rules are mathematically consistent. ⇒
 • Cox’s theorem: Bel(·|A) is isomorphic to a probability function
   p(·|·) that satisfies the axioms of (objective) probabilities.

 • Conclusion: Beliefs follow the same rules as probabilities
Marcus Hutter                 - 24 -             Universal Induction & Intelligence

                       Bayes’ Famous Rule
Let D be some possible data (i.e. D is an event with p(D) > 0) and
{Hi}_{i∈I} be a countable complete class of mutually exclusive hypotheses
(i.e. the Hi are events with Hi ∩ Hj = {} ∀i ≠ j and ∪_{i∈I} Hi = Ω).
Given: p(Hi) = a priori plausibility of hypothesis Hi (subj. prob.)
Given: p(D|Hi) = likelihood of data D under hypothesis Hi (obj. prob.)
Goal: p(Hi|D) = a posteriori plausibility of hypothesis Hi (subj. prob.)

        Solution:    p(Hi|D) = p(D|Hi) p(Hi) / Σ_{i∈I} p(D|Hi) p(Hi)

Proof: From the definition of conditional probability and

   Σ_{i∈I} p(Hi|...) = 1   ⇒   Σ_{i∈I} p(D|Hi) p(Hi) = Σ_{i∈I} p(Hi|D) p(D) = p(D)
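The rule is a one-liner in code. The following sketch uses made-up numbers for three hypotheses, just to show the normalization by the evidence p(D):

```python
# Bayes' rule over a finite hypothesis class: p(H_i|D) ∝ p(D|H_i) p(H_i).
prior      = {"H1": 0.5, "H2": 0.3, "H3": 0.2}    # p(H_i), sums to 1 (hypothetical)
likelihood = {"H1": 0.1, "H2": 0.4, "H3": 0.25}   # p(D|H_i) (hypothetical)

evidence  = sum(likelihood[h] * prior[h] for h in prior)              # p(D)
posterior = {h: likelihood[h] * prior[h] / evidence for h in prior}   # p(H_i|D)
print(posterior)                                   # plausibilities after seeing D, sum to 1
```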
Marcus Hutter                    - 25 -                  Universal Induction & Intelligence

        Example: Bayes’ and Laplace’s Rule
Assume data is generated by a biased coin with head probability θ, i.e.
Hθ :=Bernoulli(θ) with θ ∈ Θ := [0, 1].
Finite sequence: x = x1 x2 ...xn with n1 ones and n0 zeros.
Sample infinite sequence: ω ∈ Ω = {0, 1}∞
Basic event: Γx = {ω : ω1 = x1 , ..., ωn = xn } = set of all sequences
starting with x.
Data likelihood: p_θ(x) := p(Γx|Hθ) = θ^n1 (1−θ)^n0.
Bayes (1763): Uniform prior plausibility: p(θ) := p(Hθ) = 1
(so that ∫₀¹ p(θ) dθ = 1 instead of Σ_{i∈I} p(Hi) = 1)

Evidence: p(x) = ∫₀¹ p_θ(x) p(θ) dθ = ∫₀¹ θ^n1 (1−θ)^n0 dθ = n1! n0! / (n0+n1+1)!
Marcus Hutter              - 26 -           Universal Induction & Intelligence

        Example: Bayes’ and Laplace’s Rule

Bayes: Posterior plausibility of θ after seeing x is:

    p(θ|x) = p(x|θ) p(θ) / p(x) = (n+1)!/(n1! n0!) · θ^n1 (1−θ)^n0

Laplace: What is the probability of seeing 1 after having observed x?

    p(x_{n+1} = 1 | x_1 ...x_n) = p(x1)/p(x) = (n1+1)/(n+2)

Laplace believed that the sun had risen for 5000 years = 1'826'213 days,
so he concluded that the probability of doomsday tomorrow is 1/1'826'215.
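These closed-form expressions are easy to check numerically; a small sketch with exact fractions (standard library only, the counts are made up):

```python
from fractions import Fraction
from math import factorial

def evidence(n1, n0):                  # p(x) = n1! n0! / (n0 + n1 + 1)!
    return Fraction(factorial(n1) * factorial(n0), factorial(n0 + n1 + 1))

def laplace_rule(n1, n):               # p(x_{n+1} = 1 | x_{1:n}) = (n1 + 1)/(n + 2)
    return Fraction(n1 + 1, n + 2)

print(evidence(8, 2), laplace_rule(8, 10))   # e.g. 8 ones and 2 zeros observed
d = 1826213                                  # Laplace's 5000 years of sunrises, in days
print(1 - laplace_rule(d, d))                # probability of no sunrise: 1/1826215
```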
Marcus Hutter           - 27 -           Universal Induction & Intelligence



                Exercise: Envelope Paradox
 • I offer you two closed envelopes, one of which contains twice the
   amount of money of the other. You are allowed to pick one and
   open it. Now you have two options: keep the money or switch to
   the other envelope (which could double or halve your gain).
 • Symmetry argument: It doesn’t matter whether you switch, the
   expected gain is the same.
 • Refutation: With probability p = 1/2, the other envelope contains
   twice/half the amount, i.e. if you switch, your expected gain
   increases by a factor of 1.25 = (1/2)·2 + (1/2)·(1/2).
 • Present a Bayesian solution.
Marcus Hutter                - 28 -           Universal Induction & Intelligence

          The Bayes-Mixture Distribution ξ
 • Assumption: The true (objective) environment µ is unknown.
 • Bayesian approach: Replace true probability distribution µ by a
   Bayes-mixture ξ.
 • Assumption: We know that the true environment µ is contained in
   some known countable (in)finite set M of environments.
 • The Bayes-mixture ξ is defined as
       ξ(x_{1:m}) := Σ_{ν∈M} wν ν(x_{1:m})   with   Σ_{ν∈M} wν = 1,   wν > 0 ∀ν

 • The weights wν may be interpreted as the prior degree of belief that
   the true environment is ν, or k_ν = ln w_ν⁻¹ as a complexity penalty
   (prefix code length) of environment ν.
 • Then ξ(x_{1:m}) could be interpreted as the prior subjective belief
   probability in observing x_{1:m} (see the numerical sketch below).
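A minimal numerical sketch (a hypothetical three-element class of Bernoulli environments with uniform prior weights) shows how ξ and the posterior weights behave:

```python
# Bayes mixture xi over a finite class M of Bernoulli(theta) environments.
M = {0.2: 1/3, 0.5: 1/3, 0.8: 1/3}                  # environment theta -> prior weight w_nu

def nu(theta, x):                                   # nu(x_{1:n}) for a binary sequence x
    n1 = sum(x)
    return theta**n1 * (1 - theta)**(len(x) - n1)

def xi(x):                                          # xi(x) = sum_nu w_nu * nu(x)
    return sum(w * nu(theta, x) for theta, w in M.items())

x = [1, 1, 0, 1, 1, 1, 0, 1]                        # data, e.g. drawn from mu = Bernoulli(0.8)
posterior = {theta: w * nu(theta, x) / xi(x) for theta, w in M.items()}
print(posterior)                                    # belief concentrates on theta = 0.8
print("predictive prob of next 1:", xi(x + [1]) / xi(x))
```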
Marcus Hutter              - 29 -             Universal Induction & Intelligence

                Convergence and Decisions
Goal: Given seq. x_{1:t−1} ≡ x_{<t} ≡ x_1 x_2 ...x_{t−1}, predict continuation x_t.
Expectation w.r.t. µ:  E[f(ω_{1:n})] := Σ_{x∈X^n} µ(x) f(x)
KL-divergence:  D_n(µ||ξ) := E[ln(µ(ω_{1:n})/ξ(ω_{1:n}))] ≤ ln w_µ⁻¹  ∀n

Hellinger distance:  h_t(ω_{<t}) := Σ_{a∈X} (√ξ(a|ω_{<t}) − √µ(a|ω_{<t}))²

Rapid convergence:  Σ_{t=1}^∞ E[h_t(ω_{<t})] ≤ D_∞ ≤ ln w_µ⁻¹ < ∞  implies
ξ(x_t|ω_{<t}) → µ(x_t|ω_{<t}), i.e. ξ is a good substitute for the unknown µ.
Bayesian decisions: The Bayes-optimal predictor Λ_ξ suffers instantaneous
loss l_t^{Λξ} ∈ [0,1] at t only slightly larger than the µ-optimal predictor Λ_µ:
Σ_{t=1}^∞ E[(√(l_t^{Λξ}) − √(l_t^{Λµ}))²] ≤ Σ_{t=1}^∞ 2 E[h_t] < ∞  implies rapid  l_t^{Λξ} → l_t^{Λµ}.
Pareto-optimality of Λ_ξ: Every predictor with loss smaller than Λ_ξ in
some environment µ ∈ M must be worse in another environment.
Marcus Hutter              - 30 -            Universal Induction & Intelligence

     Generalization: Continuous Classes M
In statistical parameter estimation one often has a continuous
hypothesis class (e.g. a Bernoulli(θ) process with unknown θ ∈ [0, 1]).

   M := {ν_θ : θ ∈ R^d},     ξ(x) := ∫_{R^d} dθ w(θ) ν_θ(x),     ∫_{R^d} dθ w(θ) = 1

Under weak regularity conditions [CB90,H’03]:

      Theorem:  D_n(µ||ξ) ≤ ln w(µ)⁻¹ + (d/2) ln (n/2π) + O(1)

where O(1) depends on the local curvature (parametric complexity) of
ln ν_θ, and is independent of n for many reasonable classes, including all
stationary (k-th order) finite-state Markov processes (k = 0 is i.i.d.).

Dn ∝ log(n) = o(n) still implies excellent prediction and decision for
most n.                                                            [RH’07]
Marcus Hutter            - 31 -            Universal Induction & Intelligence

    Bayesian Sequence Prediction: Summary
 • The aim of probability theory is to describe uncertainty.
 • Various sources and interpretations of uncertainty:
   frequency, objective, and subjective probabilities.
 • They all respect the same rules.
 • General sequence prediction: Use known (subj.) Bayes mixture
   ξ = Σ_{ν∈M} wν ν in place of unknown (obj.) true distribution µ.
 • Bound on the relative entropy between ξ and µ.
 ⇒ posterior of ξ converges rapidly to the true posterior µ.
 • ξ is also optimal in a decision-theoretic sense w.r.t. any bounded
   loss function.
 • No structural assumptions on M and ν ∈ M.
Marcus Hutter             - 32 -          Universal Induction & Intelligence




    Universal Inductive Inference: Contents
            •   Foundations of Universal Induction
            •   Bayesian Sequence Prediction and Confirmation
            •   Fast Convergence
            •   How to Choose the Prior – Universal
            •   Kolmogorov Complexity
            •   How to Choose the Model Class – Universal
            •   Universal is Better than Continuous Class
            •   Summary / Outlook / Literature
Marcus Hutter             - 33 -            Universal Induction & Intelligence

    Universal Inductive Inference: Abstract
Solomonoff completed the Bayesian framework by providing a rigorous,
unique, formal, and universal choice for the model class and the prior. I
will discuss in breadth how and in which sense universal (non-i.i.d.)
sequence prediction solves various (philosophical) problems of traditional
Bayesian sequence prediction. I show that Solomonoff’s model possesses
many desirable properties: Fast convergence, and in contrast to most
classical continuous prior densities has no zero p(oste)rior problem, i.e.
can confirm universal hypotheses, is reparametrization and regrouping
invariant, and avoids the old-evidence and updating problem. It even
performs well (actually better) in non-computable environments.
Marcus Hutter              - 34 -             Universal Induction & Intelligence

                      Induction Examples
Sequence prediction: Predict weather/stock-quote/... tomorrow, based
on past sequence. Continue IQ test sequence like 1,4,9,16,?
Classification: Predict whether email is spam.
Classification can be reduced to sequence prediction.
Hypothesis testing/identification: Does treatment X cure cancer?
Do observations of white swans confirm that all ravens are black?
These are instances of the important problem of inductive inference or
time-series forecasting or sequence prediction.
Problem: Finding prediction rules for every particular (new) problem is
possible but cumbersome and prone to disagreement or contradiction.
Goal: A single, formal, general, complete theory for prediction.
Beyond induction: active/reward learning, fct. optimization, game theory.
Marcus Hutter             - 35 -             Universal Induction & Intelligence

         Foundations of Universal Induction
        Ockham’s razor (simplicity) principle
        Entities should not be multiplied beyond necessity.
        Epicurus’ principle of multiple explanations
        If more than one theory is consistent with the observations, keep
        all theories.
        Bayes’ rule for conditional probabilities
        Given the prior belief/probability one can predict all future prob-
        abilities.
        Turing’s universal machine
        Everything computable by a human using a fixed procedure can
        also be computed by a (universal) Turing machine.
        Kolmogorov’s complexity
        The complexity or information content of an object is the length
        of its shortest description on a universal Turing machine.
        Solomonoff’s universal prior=Ockham+Epicurus+Bayes+Turing
        Solves the question of how to choose the prior if nothing is known.
        ⇒ universal induction, formal Occam, AIT,MML,MDL,SRM,...
Marcus Hutter             - 36 -              Universal Induction & Intelligence

Bayesian Sequence Prediction and Confirmation
  • Assumption: Sequence ω ∈ X ∞ is sampled from the “true”
    probability measure µ, i.e. µ(x) := P[x|µ] is the µ-probability that
    ω starts with x ∈ X n .
  • Model class: We assume that µ is unknown but known to belong to
    a countable class of environments=models=measures
    M = {ν1 , ν2 , ...}.             [no i.i.d./ergodic/stationary assumption]
  • Hypothesis class: {Hν : ν ∈ M} forms a mutually exclusive and
    complete class of hypotheses.
  • Prior: wν := P[Hν] is our prior belief in Hν
 ⇒ Evidence: ξ(x) := P[x] = Σ_{ν∈M} P[x|Hν] P[Hν] = Σ_ν wν ν(x)
   must be our (prior) belief in x.
 ⇒ Posterior: wν(x) := P[Hν|x] = P[x|Hν] P[Hν] / P[x] is our posterior belief
   in ν (Bayes’ rule).
Marcus Hutter            - 37 -            Universal Induction & Intelligence

                How to Choose the Prior?
 • Subjective: quantifying personal prior belief (not further discussed)
 • Objective: based on rational principles (agreed on by everyone)
 • Indifference or symmetry principle: Choose wν = 1/|M| for finite M.
 • Jeffreys or Bernardo’s prior: Analogue for compact parametric
   spaces M.
 • Problem: The principles typically provide good objective priors for
   small discrete or compact spaces, but not for “large” model classes
   like countably infinite, non-compact, and non-parametric M.
 • Solution: Occam favors simplicity ⇒ Assign high (low) prior to
   simple (complex) hypotheses.
 • Problem: Quantitative and universal measure of simplicity/complexity.
Marcus Hutter            - 38 -            Universal Induction & Intelligence

                Kolmogorov Complexity K(x)
K. of string x is the length of the shortest (prefix) program producing x:
        K(x) := min_p {ℓ(p) : U(p) = x},     U = universal TM
For non-string objects o (like numbers and functions) we define
K(o) := K(⟨o⟩), where ⟨o⟩ ∈ X* is some standard code for o.
 + Simple strings like 000...0 have small K,
   irregular (e.g. random) strings have large K.
 • The definition is nearly independent of the choice of U .
 + K satisfies most properties an information measure should satisfy.
 + K shares many properties with Shannon entropy but is superior.
 − K(x) is not computable, but only semi-computable from above.

 Conclusion: K is an excellent universal complexity measure,
             suitable for quantifying Occam’s razor.
Marcus Hutter           - 39 -          Universal Induction & Intelligence

 Schematic Graph of Kolmogorov Complexity
    Although K(x) is incomputable, we can draw a schematic graph
Marcus Hutter            - 40 -            Universal Induction & Intelligence

                    The Universal Prior
 • Quantify the complexity of an environment ν or hypothesis Hν by
   its Kolmogorov complexity K(ν).

 • Universal prior: wν = w_ν^U := 2^−K(ν) is a decreasing function in
   the model’s complexity, and sums to (less than) one.
⇒ Dn ≤ K(µ) ln 2, i.e. the number of ε-deviations of ξ from µ or of l^Λξ
  from l^Λµ is proportional to the complexity of the environment.
 • No other semi-computable prior leads to better prediction (bounds).
 • For continuous M, we can assign a (proper) universal prior (not
   density) w_θ^U = 2^−K(θ) > 0 for computable θ, and 0 for uncomputable θ.
 • This effectively reduces M to a discrete class {ν_θ ∈ M : w_θ^U > 0}
   which is typically dense in M.
 • This prior has many advantages over the classical prior (densities).
Marcus Hutter             - 41 -                Universal Induction & Intelligence

                Universal Choice of Class M
 • The larger M the less restrictive is the assumption µ ∈ M.
 • The class MU of all (semi)computable (semi)measures, although
   only countable, is pretty large, since it includes all valid physics
   theories. Further, ξU is semi-computable [ZL70].
 • Solomonoff’s universal prior M (x) := probability that the output of
   a universal TM U with random input starts with x.

 • Formally: M(x) := Σ_{p : U(p)=x∗} 2^−ℓ(p), where the sum is over all
   (minimal) programs p for which U outputs a string starting with x.
 • M may be regarded as a 2^−ℓ(p)-weighted mixture over all
   deterministic environments νp. (νp(x) = 1 if U(p) = x∗ and 0 else)
 • M (x) coincides with ξU (x) within an irrelevant multiplicative constant.
Marcus Hutter            - 42 -            Universal Induction & Intelligence

Universal is better than Continuous Class&Prior
  • Problem of zero prior / confirmation of universal hypotheses:
    P[All ravens black | n black ravens]  ≡ 0 in the Bayes-Laplace model,
    but converges fast to 1 for the universal prior w_θ^U.
  • Reparametrization and regrouping invariance: w_θ^U = 2^−K(θ) always
    exists and is invariant w.r.t. all computable reparametrizations f.
    (Jeffreys prior only w.r.t. bijections, and does not always exist)
  • The Problem of Old Evidence: No risk of biasing the prior towards
    past data, since w_θ^U is fixed and independent of M.
  • The Problem of New Theories: Updating of M is not necessary,
    since M_U already includes them all.
  • M predicts better than all other mixture predictors based on any
    (continuous or discrete) model class and prior, even in
    non-computable environments.
Marcus Hutter                   - 43 -                 Universal Induction & Intelligence

                Convergence and Loss Bounds
(Bounds below hold within multiplicative/additive constants, written ≲.)
 • Total (loss) bounds:  Σ_{n=1}^∞ E[h_n] ≲ K(µ) ln 2,  where
   h_t(ω_{<t}) := Σ_{a∈X} (√ξ(a|ω_{<t}) − √µ(a|ω_{<t}))².
 • Instantaneous i.i.d. bounds: For i.i.d. M with continuous, discrete,
   and universal prior, respectively:
   E[h_n] ≲ (1/n) ln w(µ)⁻¹   and   E[h_n] ≲ (1/n) ln w_µ⁻¹ = (1/n) K(µ) ln 2.
 • Bounds for computable environments: Rapidly M(x_t|x_{<t}) → 1 on
   every computable sequence x_{1:∞} (whichsoever, e.g. 1^∞ or the digits
   of π or e), i.e. M quickly recognizes the structure of the sequence.
 • Weak instantaneous bounds: valid for all n and x_{1:n} and x̄_n ≠ x_n:
   2^−K(n) ≲ M(x̄_n|x_{<n}) ≲ 2^{2K(x_{1:n}∗)−K(n)}
 • Magic instance numbers: e.g. M(0|1^n) ≈ 2^−K(n) → 0, but spikes
   up for simple n. M is cautious at magic instance numbers n.
 • Future bounds / errors to come: If our past observations ω_{1:n}
   contain a lot of information about µ, we make few errors in future:
   Σ_{t=n+1}^∞ E[h_t|ω_{1:n}] ≲ [K(µ|ω_{1:n}) + K(n)] ln 2
Marcus Hutter             - 44 -            Universal Induction & Intelligence

    Universal Inductive Inference: Summary
Universal Solomonoff prediction solves/avoids/ameliorates many problems
of (Bayesian) induction. We discussed:
 + general total bounds for generic class, prior, and loss,
 + the Dn bound for continuous classes,
 + the problem of zero p(oste)rior & confirm. of universal hypotheses,
 + reparametrization and regrouping invariance,
 + the problem of old evidence and updating,
 + that M works even in non-computable environments,
 + how to incorporate prior knowledge,
Marcus Hutter            - 45 -           Universal Induction & Intelligence




   The Universal Similarity Metric: Contents
           • Kolmogorov Complexity
           • The Universal Similarity Metric
           • Tree-Based Clustering
           • Genomics & Phylogeny: Mammals, SARS Virus & Others
           • Classification of Different File Types
           • Language Tree (Re)construction
           • Classify Music w.r.t. Composer
           • Further Applications
           • Summary
Marcus Hutter            - 46 -            Universal Induction & Intelligence

   The Universal Similarity Metric: Abstract
The MDL method has been studied from very concrete and highly tuned
practical applications to general theoretical assertions. Sequence
prediction is just one application of MDL. The MDL idea has also been
used to define the so called information distance or universal similarity
metric, measuring the similarity between two individual objects. I will
present some very impressive recent clustering applications based on
standard Lempel-Ziv or bzip2 compression, including a completely
automatic reconstruction (a) of the evolutionary tree of 24 mammals
based on complete mtDNA, and (b) of the classification tree of 52
languages based on the declaration of human rights and (c) others.

                    Based on [Cilibrasi&Vitanyi’05]
Marcus Hutter             - 47 -              Universal Induction & Intelligence

        Conditional Kolmogorov Complexity
Question: When is object=string x similar to object=string y?
Universal solution: x is similar to y ⇔ x can be easily (re)constructed from y
⇔ Kolmogorov complexity K(x|y) := min{ℓ(p) : U(p, y) = x} is small
Examples:
1) x is very similar to itself (K(x|x) = O(1))
2) A processed x is similar to x (K(f(x)|x) = O(1) if K(f) = O(1)),
   e.g. doubling, reverting, inverting, encrypting, partially deleting x.
3) A random string is with high probability not similar to any other
   string (K(random|y) ≈ length(random)).
The problem with K(x|y) as similarity=distance measure is that it is
neither symmetric nor normalized nor computable.
Marcus Hutter             - 48 -          Universal Induction & Intelligence

            The Universal Similarity Metric
 • Symmetrization and normalization leads to a/the universal metric d:

                 0 ≤ d(x, y) := max{K(x|y), K(y|x)} / max{K(x), K(y)} ≤ 1

 • Every effective similarity between x and y is detected by d
 • Use K(x|y) ≈ K(xy) − K(y) (coding theorem) and K(x) ≡ K_U(x) ≈ K_T(x)
   =⇒ computable approximation: Normalized compression distance:

       d(x, y) ≈ [K_T(xy) − min{K_T(x), K_T(y)}] / max{K_T(x), K_T(y)} ≤ 1
 • For T choose Lempel-Ziv or gzip or bzip(2) (de)compressor in the
   applications below.
 • Theory: Lempel-Ziv compresses asymptotically better than any
   probabilistic finite state automaton predictor/compressor.
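A compact sketch of this computable approximation in Python, using the standard bz2 module as the compressor T (the example strings are made up):

```python
import bz2

def C(s: bytes) -> int:
    """Compressed length: a computable stand-in for Kolmogorov complexity K."""
    return len(bz2.compress(s))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance approximating d(x, y)."""
    cx, cy = C(x), C(y)
    return (C(x + y) - min(cx, cy)) / max(cx, cy)

a = b"the quick brown fox jumps over the lazy dog " * 20
b = b"the quick brown fox jumps over the lazy cat " * 20
r = bytes(range(256)) * 4                      # poorly compressible "random-like" data
print(ncd(a, b), ncd(a, r))                    # similar strings give the smaller distance
```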
Marcus Hutter              - 49 -              Universal Induction & Intelligence


                   Tree-Based Clustering

 • If many objects x1 , ..., xn need to be compared, determine the

           similarity matrix        Mij = d(xi , xj ) for 1 ≤ i, j ≤ n


 • Now cluster similar objects.


 • There are various clustering techniques.


 • Tree-based clustering: Create a tree connecting similar objects,


 • e.g. quartet method (for clustering)
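For illustration, one could combine the compression distance with off-the-shelf agglomerative clustering (a simple stand-in for the quartet method, not the method used on the slides); SciPy's hierarchy module is assumed to be available:

```python
import bz2
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

def ncd(x: bytes, y: bytes) -> float:           # normalized compression distance
    cx, cy = len(bz2.compress(x)), len(bz2.compress(y))
    return (len(bz2.compress(x + y)) - min(cx, cy)) / max(cx, cy)

def similarity_tree(objects, labels):
    """Pairwise NCD matrix M_ij, then average-linkage clustering into a tree."""
    n = len(objects)
    M = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            M[i, j] = M[j, i] = ncd(objects[i], objects[j])
    tree = linkage(squareform(M), method="average")
    dendrogram(tree, labels=labels)             # plotting requires matplotlib
    return M, tree
```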
Marcus Hutter                  - 50 -              Universal Induction & Intelligence

          Genomics & Phylogeny: Mammals
Let x1 , ..., xn be mitochondrial genome sequences of different mammals:
Partial distance matrix Mij using bzip2(?). Column order matches the rows:
BrownBear, Carp, Cat, Chimpanzee, Cow, Echidna, FinbackWhale, Gibbon,
Gorilla, HouseMouse, Human, ...
   BrownBear 0.002 0.943   0.887 0.935 0.906 0.944 0.915 0.939 0.940 0.934 0.930    ...
        Carp 0.943 0.006   0.946 0.954 0.947 0.955 0.952 0.951 0.957 0.956 0.946    ...
         Cat 0.887 0.946   0.003 0.926 0.897 0.942 0.905 0.928 0.931 0.919 0.922    ...
  Chimpanzee 0.935 0.954   0.926 0.006 0.926 0.948 0.926 0.849 0.731 0.943 0.667    ...
         Cow 0.906 0.947   0.897 0.926 0.006 0.936 0.885 0.931 0.927 0.925 0.920    ...
     Echidna 0.944 0.955   0.942 0.948 0.936 0.005 0.936 0.947 0.947 0.941 0.939    ...
FinbackWhale 0.915 0.952   0.905 0.926 0.885 0.936 0.005 0.930 0.931 0.933 0.922    ...
      Gibbon 0.939 0.951   0.928 0.849 0.931 0.947 0.930 0.005 0.859 0.948 0.844    ...
     Gorilla 0.940 0.957   0.931 0.731 0.927 0.947 0.931 0.859 0.006 0.944 0.737    ...
  HouseMouse 0.934 0.956   0.919 0.943 0.925 0.941 0.933 0.948 0.944 0.006 0.932    ...
       Human 0.930 0.946   0.922 0.667 0.920 0.939 0.922 0.844 0.737 0.932 0.005    ...
         ... ...   ...     ...   ...   ...   ...   ...   ...   ...    ...   ...     ...
Marcus Hutter           - 51 -                    Universal Induction & Intelligence

           Genomics & Phylogeny: Mammals
Evolutionary tree built from complete mammalian mtDNA of 24 species:
              Carp
              Cow
        BlueWhale
    FinbackWhale
               Cat                 Ferungulates
        BrownBear
         PolarBear
         GreySeal
       HarborSeal                             Eutheria
             Horse
       WhiteRhino
            Gibbon
            Gorilla                Primates
            Human
      Chimpanzee
      PygmyChimp
        Orangutan
SumatranOrangutan
      HouseMouse                   Eutheria - Rodents
               Rat
         Opossum           Metatheria
          Wallaroo
           Echidna         Prototheria
          Platypus
Marcus Hutter            - 52 -             Universal Induction & Intelligence

Genomics & Phylogeny: SARS Virus and Others

  • Clustering of the SARS virus in relation to potentially similar viruses based
    on complete sequenced genome(s) using bzip2:


  • The relations are very similar to the definitive tree based on
    medical-macrobio-genomics analysis from biologists.
Marcus Hutter                        - 53 -                      Universal Induction & Intelligence

Genomics & Phylogeny: SARS Virus and Others
[Tree figure: the SARS virus (SARSTOR2v120403) clusters with the coronaviruses
(HumanCorona1, MurineHep2, MurineHep11, RatSialCorona, AvianIB1, AvianIB2),
separate from the measles viruses (MeaslesSch, MeaslesMora), the adenoviruses
(DuckAdeno1, AvianAdeno1CELO, HumanAdeno40, BovineAdeno3), SIRV1, SIRV2, and PRD1.]
Marcus Hutter            - 54 -            Universal Induction & Intelligence

       Classification of Different File Types
Classification of files based on markedly different file types using bzip2

• Four mitochondrial gene sequences

• Four excerpts from the novel “The Zeppelin’s Passenger”

• Four MIDI files without further processing

• Two Linux x86 ELF executables (the cp and rm commands)

• Two compiled Java class files

No features of any specific domain of application are used!
Marcus Hutter                      - 55 -                        Universal Induction & Intelligence

       Classification of Different File Types
[Tree figure: the gene sequences (GenesFoxC, GenesRatD, GenesPolarBearB,
GenesBlackBearA), the MIDI files (MusicHendrixA/B, MusicBergA/B), the ELF
executables (ELFExecutableA/B), the Java class files (JavaClassA/B), and the
text excerpts (TextA-D) each form their own cluster.]
Perfect classification!
Marcus Hutter            - 56 -            Universal Induction & Intelligence

           Language Tree (Re)construction

 • Let x1 , ..., xn be the “The Universal Declaration of Human Rights”
   in various languages 1, ..., n.


 • Distance matrix Mij based on gzip. Language tree constructed
   from Mij by the Fitch-Margoliash method [Li&al’03]


 • All main linguistic groups can be recognized (next slide)
Basque [Spain]
                        Hungarian [Hungary]
                        Polish [Poland]
                        Sorbian [Germany]
          SLAVIC        Slovak [Slovakia]
                        Czech [Czech Rep]
                        Slovenian [Slovenia]
                        Serbian [Serbia]
                        Bosnian [Bosnia]
                        Croatian [Croatia]
                        Romani Balkan [East Europe]
                        Albanian [Albany]
               BALTIC   Lithuanian [Lithuania]
                        Latvian [Latvia]
               ALTAIC   Turkish [Turkey]
                        Uzbek [Utzbekistan]
                        Breton [France]
                        Maltese [Malta]
                        OccitanAuvergnat [France]
                        Walloon [Belgique]
                        English [UK]
                        French [France]
ROMANCE                 Asturian [Spain]
                        Portuguese [Portugal]
                        Spanish [Spain]
                        Galician [Spain]
                        Catalan [Spain]
                        Occitan [France]
                        Rhaeto Romance [Switzerland]
                        Friulian [Italy]
                        Italian [Italy]
                        Sammarinese [Italy]
                        Corsican [France]
                        Sardinian [Italy]
                        Romanian [Romania]
                        Romani Vlach [Macedonia]
              CELTIC    Welsh [UK]
                        Scottish Gaelic [UK]
                        Irish Gaelic [UK]
                        German [Germany]
                        Luxembourgish [Luxembourg]
                        Frisian [Netherlands]
                        Dutch [Netherlands]
                        Afrikaans
   GERMANIC             Swedish [Sweden]
                        Norwegian Nynorsk [Norway]
                        Danish [Denmark]
                        Norwegian Bokmal [Norway]
                        Faroese [Denmark]
                        Icelandic [Iceland]
          UGROFINNIC    Finnish [Finland]
                        Estonian [Estonia]
Marcus Hutter               - 58 -              Universal Induction & Intelligence

            Classify Music w.r.t. Composer
Let m1 , ..., mn be pieces of music in MIDI format.
Preprocessing the MIDI files:
  • Delete identifying information (composer, title, ...), instrument
    indicators, MIDI control signals, tempo variations, ...
  • Keep only note-on and note-off information.
  • A note k ∈ ℤ half-tones above the average note is coded as a
    signed byte with value k.
  • The whole piece is quantized in 0.05 second intervals.
  • Tracks are sorted according to decreasing average volume, and then
    output in succession.
Processed files x1 , ..., xn still sounded like the original.
Marcus Hutter                     - 59 -           Universal Induction & Intelligence

               Classify Music w.r.t. Composer
12 pieces of music: 4×Bach + 4×Chopin + 4×Debussy. Class. by bzip2
[Tree figure: the four Debussy pieces (DebusBerg1-4), the four Chopin preludes
(ChopPrel1, 15, 22, 24), and the four Bach pieces (BachWTK2F1/F2, BachWTK2P1/P2)
each form their own cluster.]
Perfect grouping of processed MIDI files w.r.t. composers.
Marcus Hutter           - 60 -             Universal Induction & Intelligence

                  Further Applications
                • Classification of Fungi
                • Optical character recognition
                • Classification of Galaxies
                • Clustering of novels w.r.t. authors
                • Larger data sets


                  See [Cilibrasi&Vitanyi’03]
Marcus Hutter             - 61 -            Universal Induction & Intelligence

         The Clustering Method: Summary

          • based on the universal similarity metric,
          • based on Kolmogorov complexity,
          • approximated by bzip2,
          • with the similarity matrix represented by a tree,
          • approximated by the quartet method


          • leads to excellent classification in many domains.
Marcus Hutter          - 62 -             Universal Induction & Intelligence

       Universal Rational Agents: Contents
                • Rational agents
                • Sequential decision theory
                • Reinforcement learning
                • Value function
                • Universal Bayes mixture and AIXI model
                • Self-optimizing policies
                • Pareto-optimality
                • Environmental Classes
Marcus Hutter              - 63 -            Universal Induction & Intelligence

        Universal Rational Agents: Abstract
Sequential decision theory formally solves the problem of rational agents
in uncertain worlds if the true environmental prior probability distribution
is known. Solomonoff’s theory of universal induction formally solves the
problem of sequence prediction for unknown prior distribution.
Here we combine both ideas and develop an elegant parameter-free
theory of an optimal reinforcement learning agent embedded in an
arbitrary unknown environment that possesses essentially all aspects of
rational intelligence. The theory reduces all conceptual AI problems to
pure computational ones.
We give strong arguments that the resulting AIXI model is the most
intelligent unbiased agent possible. Other discussed topics are relations
between problem classes.
Marcus Hutter                 - 64 -                  Universal Induction & Intelligence

                          The Agent Model

[Agent model diagram: in each cycle k the agent (with policy p and its own work
tape) outputs an action y_k to the environment (q, with its own work tape); the
environment returns a perception x_k = r_k o_k consisting of a reward r_k and an
observation o_k, and the cycle repeats: r1|o1, y1, r2|o2, y2, ...]

Most if not all AI problems can be formulated within the agent
framework
Marcus Hutter              - 65 -             Universal Induction & Intelligence

Rational Agents in Deterministic Environments

  - p : X∗ → Y∗ is the deterministic policy of the agent,
    p(x_{<k}) = y_{1:k} with x_{<k} ≡ x_1 ...x_{k−1}.
  - q : Y∗ → X∗ is the deterministic environment,
    q(y_{1:k}) = x_{1:k} with y_{1:k} ≡ y_1 ...y_k.
  - Input x_k ≡ r_k o_k consists of a regular informative part o_k
    and a reward r_k ∈ [0..r_max].
  - Value V_{km}^{pq} := r_k + ... + r_m,
    optimal policy p^best := arg max_p V_{1m}^{pq},
    lifespan or initial horizon m (toy example below).
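A toy instantiation of this deterministic loop (the policy, environment, and rewards are made up; only the bookkeeping mirrors the definitions above):

```python
# Deterministic agent-environment interaction and the value V_{1m}^{pq}.
def policy(percepts):                  # p: past percepts x_<k -> next action y_k
    return 1 if (percepts and percepts[-1][1] == 1) else 0

def environment(actions):              # q: actions y_{1:k} -> percept x_k = (r_k, o_k)
    r_k = 1.0 if actions[-1] == 1 else 0.0
    return (r_k, 1)                    # observation o_k is constant in this toy world

m, xs, ys = 5, [], []
for k in range(1, m + 1):
    ys.append(policy(xs))              # y_k computed from x_<k
    xs.append(environment(ys))         # x_k returned for y_{1:k}
print("V_1m =", sum(r for r, o in xs)) # r_1 + ... + r_m
```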
Marcus Hutter                  - 66 -             Universal Induction & Intelligence

       Agents in Probabilistic Environments
Given history y_{1:k} x_{<k}, the probability that the environment leads to
perception x_k in cycle k is (by definition) σ(x_k | y_{1:k} x_{<k}).
Abbreviation (chain rule)

      σ(x_{1:m} | y_{1:m}) = σ(x_1 | y_1)·σ(x_2 | y_{1:2} x_1)· ... ·σ(x_m | y_{1:m} x_{<m})

The average value of policy p with horizon m in environment σ is
defined as

      V_σ^p := (1/m) Σ_{x_{1:m}} (r_1 + ... + r_m) σ(x_{1:m} | y_{1:m})   with   y_{1:m} = p(x_{<m})

The goal of the agent should be to maximize the value.
Marcus Hutter                     - 67 -                    Universal Induction & Intelligence

                     Optimal Policy and Value
The σ-optimal policy p^σ := arg max_p V_σ^p maximizes V_σ^p ≤ V_σ^∗ := V_σ^{p^σ}.
Explicit expressions for the action y_k in cycle k of the σ-optimal policy
p^σ and its value V_σ^∗ are

  y_k = arg max_{y_k} Σ_{x_k} max_{y_{k+1}} Σ_{x_{k+1}} ... max_{y_m} Σ_{x_m} (r_k + ... + r_m)·σ(x_{k:m} | y_{1:m} x_{<k}),

  V_σ^∗ = (1/m) max_{y_1} Σ_{x_1} max_{y_2} Σ_{x_2} ... max_{y_m} Σ_{x_m} (r_1 + ... + r_m)·σ(x_{1:m} | y_{1:m}).

Keyword: Expectimax tree/algorithm.
Marcus Hutter                - 68 -               Universal Induction & Intelligence

                  Expectimax Tree/Algorithm

The tree alternates max nodes (choose the action y_k with maximal value) and
expectation nodes (σ-expected reward r_k and observation o_k):

    V_σ^∗(yx_{<k}) = max_{y_k} V_σ^∗(yx_{<k} y_k)
    V_σ^∗(yx_{<k} y_k) = Σ_{x_k} [r_k + V_σ^∗(yx_{1:k})] · σ(x_k | yx_{<k} y_k)
    V_σ^∗(yx_{1:k}) = max_{y_{k+1}} V_σ^∗(yx_{1:k} y_{k+1})     (and so on, down to horizon m)

Marcus Hutter            - 69 -            Universal Induction & Intelligence
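The recursion translates directly into code. Below is a minimal recursive expectimax sketch for a small, made-up environment σ with binary actions and percepts (the percept doubles as the reward); like the tree, it is exponential in the horizon m:

```python
ACTIONS, PERCEPTS = (0, 1), (0, 1)     # finite action set Y and percept set X

def sigma(x_k, ys, xs):
    """Hypothetical known environment sigma(x_k | y_{1:k} x_{<k}):
    action 1 yields percept/reward 1 with prob 0.8, action 0 with prob 0.3."""
    p1 = 0.8 if ys[-1] == 1 else 0.3
    return p1 if x_k == 1 else 1.0 - p1

def V(ys, xs, m):
    """Expectimax value-to-go: max over actions, expectation over percepts."""
    if len(xs) == m:                                    # horizon reached
        return 0.0
    return max(sum(sigma(x, ys + (y,), xs) *            # expectation over percept x_k
                   (x + V(ys + (y,), xs + (x,), m))     # reward r_k = x_k plus future value
                   for x in PERCEPTS)
               for y in ACTIONS)                        # sigma-optimal action maximizes this

print(V((), (), 3))   # ~2.4: always choosing action 1 is optimal in this toy environment
```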

                 Known environment µ

 • Assumption: µ is the true environment in which the agent operates

 • Then, policy p^µ is optimal in the sense that no other policy for an
   agent leads to higher µ^AI-expected reward.

 • Special choices of µ: deterministic or adversarial environments,
   Markov decision processes (MDPs).

 • There is no problem in principle in computing the optimal action y_k as
   long as µ^AI is known and computable and X, Y and m are finite.

 • Things drastically change if µ^AI is unknown ...
Marcus Hutter                 - 70 -             Universal Induction & Intelligence

          The Bayes-mixture distribution ξ
Assumption: The true environment µ is unknown.
Bayesian approach: The true probability distribution µAI is not learned
directly, but is replaced by a Bayes-mixture ξ AI .
Assumption: We know that the true environment µ is contained in some
known (finite or countable) set M of environments.
The Bayes-mixture ξ is defined as

ξ(x1:m |y1:m ) :=         wν ν(x1:m |y1:m )   with         wν = 1,     wν  0 ∀ν
                    ν∈M                              ν∈M

The weights wν may be interpreted as the prior degree of belief that the
true environment is ν.
Then ξ(x1:m |y1:m ) could be interpreted as the prior subjective belief
probability in observing x1:m , given actions y1:m .
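
As a toy illustration, the following sketch computes such a mixture and the
corresponding posterior weights over a finite class M, assuming each
environment ν is given as a function returning the probability of a percept
sequence conditioned on an action sequence; names are illustrative.

```python
# Bayes-mixture over a finite class of environments (illustrative interface):
# each nu(x_seq, y_seq) returns the probability of percepts x_1:m given actions y_1:m.

def xi(x_seq, y_seq, environments, weights):
    """Bayes-mixture: xi(x_1:m | y_1:m) = sum_nu w_nu * nu(x_1:m | y_1:m)."""
    return sum(w * nu(x_seq, y_seq) for nu, w in zip(environments, weights))

def posterior_weights(x_seq, y_seq, environments, weights):
    """Posterior belief in each environment after observing x_1:m under actions y_1:m."""
    joint = [w * nu(x_seq, y_seq) for nu, w in zip(environments, weights)]
    total = sum(joint)
    return [j / total for j in joint] if total > 0 else list(weights)
```

Plugging ξ in place of σ into the expectimax sketch above yields the
Bayes-optimal policy pξ discussed next.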
Marcus Hutter            - 71 -            Universal Induction & Intelligence

                   Questions of Interest

 • It is natural to follow the policy pξ which maximizes V_ξ^p.
 • If µ is the true environment, the expected reward when following
   policy pξ will be V_µ^{pξ} (a value-evaluation sketch follows this list).
 • The optimal (but infeasible) policy pµ yields reward V_µ^{pµ} ≡ V_µ^*.
 • Are there policies with uniformly larger value than V_µ^{pξ}?
 • How close is V_µ^{pξ} to V_µ^*?
 • What is the most general class M and weights wν?
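
To make values such as V_µ^{pξ} concrete, here is a sketch of evaluating a
fixed policy in a given environment over horizon m (its ν-expected total
reward), using the same illustrative interface as the expectimax sketch above.

```python
# Value V_nu^p of a fixed policy p in environment nu over horizon m (sketch).
# policy(history) returns the action chosen in the current cycle;
# nu(x, history, y) and reward(x) are as in the expectimax sketch above.

def policy_value(history, k, m, policy, percepts, nu, reward):
    """nu-expected total reward of following `policy` from cycle k to m."""
    if k > m:
        return 0.0
    y = policy(history)                      # action chosen by the policy in cycle k
    v = 0.0
    for x in percepts:                       # nu-expectation over the next percept
        p = nu(x, history, y)
        if p > 0.0:
            v += p * (reward(x) + policy_value(history + [(y, x)], k + 1, m,
                                               policy, percepts, nu, reward))
    return v
```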
Marcus Hutter            - 72 -            Universal Induction & Intelligence

            A universal choice of ξ and M
 • We have to assume the existence of some structure on the
   environment to avoid the No-Free-Lunch Theorems [Wolpert 96].
 • We can only unravel effective structures which are describable by
   (semi)computable probability distributions.
 • So we may include all (semi)computable (semi)distributions in M.
 • Occam’s razor and Epicurus’ principle of multiple explanations tell
   us to assign high prior belief to simple environments.
 • Using Kolmogorov’s universal complexity measure K(ν) for
   environments ν one should set wν ∼ 2^{−K(ν)}, where K(ν) is the
   length of the shortest program on a universal TM computing ν
   (see the toy sketch after this list).
 • The resulting AIXI model [Hutter:00] is a unification of (Bellman’s)
   sequential decision and Solomonoff’s universal induction theory.
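
K(ν) itself is incomputable, so any concrete weighting must approximate it.
The toy sketch below uses compressed length (via bz2) of a model description
as a crude, computable stand-in for K; it is purely illustrative and not the
construction used in AIXI.

```python
import bz2

# Occam-style prior over a finite class of candidate models, with compressed
# description length (in bytes) as a rough proxy for Kolmogorov complexity.

def occam_prior(model_descriptions):
    """Assign w_nu proportional to 2^(-compressed length), then normalise."""
    lengths = [len(bz2.compress(d.encode("utf-8"))) for d in model_descriptions]
    raw = [2.0 ** (-l) for l in lengths]          # shorter description => larger weight
    total = sum(raw)
    return [r / total for r in raw]

# Example: highly regular (compressible) descriptions receive most of the prior mass.
weights = occam_prior(["0" * 1000, "01" * 500,
                       "an irregular description with no obvious pattern"])
print(weights)
```

Simpler, more regular descriptions compress better and therefore receive
higher prior weight, in the spirit of wν ∼ 2^{−K(ν)}.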
Marcus Hutter            - 73 -            Universal Induction & Intelligence

                The AIXI Model in one Line
                The most intelligent unbiased learning agent
  yk = arg max_{yk} Σ_{xk} ... max_{ym} Σ_{xm} [r(xk) + ... + r(xm)] · Σ_{q : U(q,y1:m)=x1:m} 2^{−ℓ(q)}

                  is an elegant mathematical theory of AI

Claim: AIXI is the most intelligent environment-independent, i.e.
universally optimal, agent possible.

Proof: For formalizations, quantifications, and proofs, see [Hut05].

Applications: Strategic Games, Function Minimization, Supervised
Learning from Examples, Sequence Prediction, Classification.

            In the following we consider generic M and wν .
Marcus Hutter            - 74 -            Universal Induction & Intelligence

                   Pareto-Optimality of pξ
Policy pξ is Pareto-optimal in the sense that there is no other policy p
with V_ν^p ≥ V_ν^{pξ} for all ν ∈ M and strict inequality for at least one ν.

                   Self-optimizing Policies
Under which circumstances does the value of the universal policy pξ
converge to optimum?
             V_ν^{pξ} → V_ν^*   for horizon m → ∞      for all   ν ∈ M.        (1)

The least we must demand from M to have a chance that (1) is true is
that there exists some policy p̃ at all with this property, i.e.

         ∃p̃ : V_ν^{p̃} → V_ν^*   for horizon m → ∞      for all   ν ∈ M.        (2)

Main result: (2) ⇒ (1): The necessary condition of the existence of a
self-optimizing policy p̃ is also sufficient for pξ to be self-optimizing.
Marcus Hutter            - 75 -            Universal Induction & Intelligence

Environments w. (Non)Self-Optimizing Policies
Marcus Hutter            - 76 -            Universal Induction & Intelligence

       Particularly Interesting Environments
 • Sequence Prediction, e.g. weather or stock-market prediction.
    Strong result: V_µ^* − V_µ^{pξ} = O(K(µ)/m),   m = horizon.

 • Strategic Games: Learn to play well (minimax) strategic zero-sum
   games (like chess) or even exploit limited capabilities of opponent.
 • Optimization: Find (approximate) minimum of function with as few
   function calls as possible. Difficult exploration versus exploitation
   problem.
 • Supervised learning: Learn functions by presenting (z, f(z)) pairs
   and ask for function values of z′ by presenting (z′, ?) pairs.
   Supervised learning is much faster than reinforcement learning.
AIξ quickly learns to predict, play games, optimize, and learn supervised.
Marcus Hutter            - 77 -            Universal Induction & Intelligence

       Universal Rational Agents: Summary
 • Setup: Agents acting in general probabilistic environments with
   reinforcement feedback.
 • Assumptions: Unknown true environment µ belongs to a known
   class of environments M.
 • Results: The Bayes-optimal policy pξ based on the Bayes-mixture
   ξ = Σ_{ν∈M} wν ν is Pareto-optimal and self-optimizing if M admits
   self-optimizing policies.
 • We have reduced the AI problem to pure computational questions
   (which are addressed in the time-bounded AIXItl).
 • AIξ incorporates all aspects of intelligence (apart from comp.-time).
 • How to choose horizon: use future value and universal discounting.
 • ToDo: prove (optimality) properties, scale down, implement.
Marcus Hutter            - 78 -            Universal Induction & Intelligence

                             Wrap Up
 • Setup: Given (non)iid data D = (x1 , ..., xn ), predict xn+1
 • Ultimate goal is to maximize profit or minimize loss
 • Consider Models/Hypothesis Hi ∈ M
 • Max.Likelihood: Hbest = arg maxi p(D|Hi ) (overfits if M large)
 • Bayes: Posterior probability of Hi is p(Hi |D) ∝ p(D|Hi )p(Hi )
 • Bayes needs prior(Hi )
 • Occam+Epicurus: High prior for simple models.
 • Kolmogorov/Solomonoff: Quantification of simplicity/complexity
 • Bayes works if D is sampled from Htrue ∈ M
 • Universal AI = Universal Induction + Sequential Decision Theory
Marcus Hutter            - 79 -            Universal Induction & Intelligence

                              Literature
[CV05]  R. Cilibrasi and P. M. B. Vitányi. Clustering by compression. IEEE
        Trans. Information Theory, 51(4):1523–1545, 2005.
        http://arXiv.org/abs/cs/0312044
[Hut05] M. Hutter. Universal Artificial Intelligence: Sequential Decisions
        based on Algorithmic Probability. Springer, Berlin, 2005.
        http://www.hutter1.net/ai/uaibook.htm
[Hut07] M. Hutter. On universal prediction and Bayesian confirmation.
        Theoretical Computer Science, 384(1):33–48, 2007.
        http://arxiv.org/abs/0709.1516
[LH07]  S. Legg and M. Hutter. Universal intelligence: a definition of
        machine intelligence. Minds & Machines, 17(4):391–444, 2007.
        http://dx.doi.org/10.1007/s11023-007-9079-x
Marcus Hutter            - 80 -            Universal Induction & Intelligence

           Thanks!         Questions?            Details:

Jobs: PostDoc and PhD positions at RSISE and NICTA, Australia


Projects at http://www.hutter1.net/



    A Unified View of Artificial Intelligence
         =                          =
  Decision Theory    = Probability + Utility Theory
         +                          +
 Universal Induction = Ockham + Bayes + Turing


Open research problems at www.hutter1.net/ai/uaibook.htm
Compression competition with 50’000 Euro prize at prize.hutter1.net

Foundations of Machine Learning

  • 1. Marcus Hutter -1- Universal Induction & Intelligence Foundations of Machine Learning Marcus Hutter Canberra, ACT, 0200, Australia http://www.hutter1.net/ ANU RSISE NICTA Machine Learning Summer School MLSS-2008, 2 – 15 March, Kioloa
  • 2. Marcus Hutter -2- Universal Induction & Intelligence Overview • Setup: Given (non)iid data D = (x1 , ..., xn ), predict xn+1 • Ultimate goal is to maximize profit or minimize loss • Consider Models/Hypothesis Hi ∈ M • Max.Likelihood: Hbest = arg maxi p(D|Hi ) (overfits if M large) • Bayes: Posterior probability of Hi is p(Hi |D) ∝ p(D|Hi )p(Hi ) • Bayes needs prior(Hi ) • Occam+Epicurus: High prior for simple models. • Kolmogorov/Solomonoff: Quantification of simplicity/complexity • Bayes works if D is sampled from Htrue ∈ M • Universal AI = Universal Induction + Sequential Decision Theory
  • 3. Marcus Hutter -3- Universal Induction & Intelligence Abstract Machine learning is concerned with developing algorithms that learn from experience, build models of the environment from the acquired knowledge, and use these models for prediction. Machine learning is usually taught as a bunch of methods that can solve a bunch of problems (see my Introduction to SML last week). The following tutorial takes a step back and asks about the foundations of machine learning, in particular the (philosophical) problem of inductive inference, (Bayesian) statistics, and artificial intelligence. The tutorial concentrates on principled, unified, and exact methods.
  • 4. Marcus Hutter -4- Universal Induction & Intelligence Table of Contents • Overview • Philosophical Issues • Bayesian Sequence Prediction • Universal Inductive Inference • The Universal Similarity Metric • Universal Artificial Intelligence • Wrap Up • Literature
  • 5. Marcus Hutter -5- Universal Induction & Intelligence Philosophical Issues: Contents • Philosophical Problems • On the Foundations of Machine Learning • Example 1: Probability of Sunrise Tomorrow • Example 2: Digits of a Computable Number • Example 3: Number Sequences • Occam’s Razor to the Rescue • Grue Emerald and Confirmation Paradoxes • What this Tutorial is (Not) About • Sequential/Online Prediction – Setup
  • 6. Marcus Hutter -6- Universal Induction & Intelligence Philosophical Issues: Abstract I start by considering the philosophical problems concerning machine learning in general and induction in particular. I illustrate the problems and their intuitive solution on various (classical) induction examples. The common principle to their solution is Occam’s simplicity principle. Based on Occam’s and Epicurus’ principle, Bayesian probability theory, and Turing’s universal machine, Solomonoff developed a formal theory of induction. I describe the sequential/online setup considered in this tutorial and place it into the wider machine learning context.
  • 7. Marcus Hutter -7- Universal Induction & Intelligence Philosophical Problems • Does inductive inference work? Why? How? • How to choose the model class? • How to choose the prior? • How to make optimal decisions in unknown environments? • What is intelligence?
  • 8. Marcus Hutter -8- Universal Induction & Intelligence On the Foundations of Machine Learning • Example: Algorithm/complexity theory: The goal is to find fast algorithms solving problems and to show lower bounds on their computation time. Everything is rigorously defined: algorithm, Turing machine, problem classes, computation time, ... • Most disciplines start with an informal way of attacking a subject. With time they get more and more formalized often to a point where they are completely rigorous. Examples: set theory, logical reasoning, proof theory, probability theory, infinitesimal calculus, energy, temperature, quantum field theory, ... • Machine learning: Tries to build and understand systems that learn from past data, make good prediction, are able to generalize, act intelligently, ... Many terms are only vaguely defined or there are many alternate definitions.
  • 9. Marcus Hutter -9- Universal Induction & Intelligence Example 1: Probability of Sunrise Tomorrow What is the probability p(1|1d ) that the sun will rise tomorrow? (d = past # days sun rose, 1 =sun rises. 0 = sun will not rise) • p is undefined, because there has never been an experiment that tested the existence of the sun tomorrow (ref. class problem). • The p = 1, because the sun rose in all past experiments. • p = 1 − , where is the proportion of stars that explode per day. d+1 • p= d+2 , which is Laplace rule derived from Bayes rule. • Derive p from the type, age, size and temperature of the sun, even though we never observed another star with those exact properties. Conclusion: We predict that the sun will rise tomorrow with high probability independent of the justification.
  • 10. Marcus Hutter - 10 - Universal Induction & Intelligence Example 2: Digits of a Computable Number • Extend 14159265358979323846264338327950288419716939937? • Looks random?! • Frequency estimate: n = length of sequence. ki = number of i occured i =⇒ Probability of next digit being i is n . Asymptotically i 1 n → 10 (seems to be) true. • But we have the strong feeling that (i.e. with high probability) the next digit will be 5 because the previous digits were the expansion of π. • Conclusion: We prefer answer 5, since we see more structure in the sequence than just random digits.
  • 11. Marcus Hutter - 11 - Universal Induction & Intelligence Example 3: Number Sequences Sequence: x1 , x2 , x3 , x4 , x5 , ... 1, 2, 3, 4, ?, ... • x5 = 5, since xi = i for i = 1..4. • x5 = 29, since xi = i4 − 10i3 + 35i2 − 49i + 24. Conclusion: We prefer 5, since linear relation involves less arbitrary parameters than 4th-order polynomial. Sequence: 2,3,5,7,11,13,17,19,23,29,31,37,41,43,47,53,59,? • 61, since this is the next prime • 60, since this is the order of the next simple group Conclusion: We prefer answer 61, since primes are a more familiar concept than simple groups. On-Line Encyclopedia of Integer Sequences: http://www.research.att.com/∼njas/sequences/
  • 12. Marcus Hutter - 12 - Universal Induction & Intelligence Occam’s Razor to the Rescue • Is there a unique principle which allows us to formally arrive at a prediction which - coincides (always?) with our intuitive guess -or- even better, - which is (in some sense) most likely the best or correct answer? • Yes! Occam’s razor: Use the simplest explanation consistent with past data (and use it for prediction). • Works! For examples presented and for many more. • Actually Occam’s razor can serve as a foundation of machine learning in general, and is even a fundamental principle (or maybe even the mere definition) of science. • Problem: Not a formal/mathematical objective principle. What is simple for one may be complicated for another.
  • 13. Marcus Hutter - 13 - Universal Induction & Intelligence Grue Emerald Paradox Hypothesis 1: All emeralds are green. Hypothesis 2: All emeralds found till y2010 are green, thereafter all emeralds are blue. • Which hypothesis is more plausible? H1! Justification? • Occam’s razor: take simplest hypothesis consistent with data. is the most important principle in machine learning and science.
  • 14. Marcus Hutter - 14 - Universal Induction & Intelligence Confirmation Paradox (i) R → B is confirmed by an R-instance with property B (ii) ¬B → ¬R is confirmed by a ¬B-instance with property ¬R. (iii) Since R → B and ¬B → ¬R are logically equivalent, R → B is also confirmed by a ¬B-instance with property ¬R. Example: Hypothesis (o): All ravens are black (R=Raven, B=Black). (i) observing a Black Raven confirms Hypothesis (o). (iii) observing a White Sock also confirms that all Ravens are Black, since a White Sock is a non-Raven which is non-Black. This conclusion sounds absurd.
  • 15. Marcus Hutter - 15 - Universal Induction & Intelligence Problem Setup • Induction problems can be phrased as sequence prediction tasks. • Classification is a special case of sequence prediction. (With some tricks the other direction is also true) • This tutorial focusses on maximizing profit (minimizing loss). We’re not (primarily) interested in finding a (true/predictive/causal) model. • Separating noise from data is not necessary in this setting!
  • 16. Marcus Hutter - 16 - Universal Induction & Intelligence What This Tutorial is (Not) About Dichotomies in Artificial Intelligence & Machine Learning scope of my tutorial ⇔ scope of other tutorials (machine) learning ⇔ (GOFAI) knowledge-based statistical ⇔ logic-based decision ⇔ prediction ⇔ induction ⇔ action classification ⇔ regression sequential / non-iid ⇔ independent identically distributed online learning ⇔ offline/batch learning passive prediction ⇔ active learning Bayes ⇔ MDL ⇔ Expert ⇔ Frequentist uninformed / universal ⇔ informed / problem-specific conceptual/mathematical issues ⇔ computational issues exact/principled ⇔ heuristic supervised learning ⇔ unsupervised ⇔ RL learning exploitation ⇔ exploration
  • 17. Marcus Hutter - 17 - Universal Induction & Intelligence Sequential/Online Prediction – Setup In sequential or online prediction, for times t = 1, 2, 3, ..., p our predictor p makes a prediction yt ∈ Y based on past observations x1 , ..., xt−1 . p Thereafter xt ∈ X is observed and p suffers Loss(xt , yt ). The goal is to design predictors with small total loss or cumulative T p Loss1:T (p) := t=1 Loss(xt , yt ). Applications are abundant, e.g. weather or stock market forecasting. Loss(x, y) X = {sunny , rainy} Example: umbrella 0.1 0.3 Y= sunglasses 0.0 1.0 Setup also includes: Classification and Regression problems.
  • 18. Marcus Hutter - 18 - Universal Induction & Intelligence Bayesian Sequence Prediction: Contents • Uncertainty and Probability • Frequency Interpretation: Counting • Objective Interpretation: Uncertain Events • Subjective Interpretation: Degrees of Belief • Bayes’ and Laplace’s Rules • Envelope Paradox • The Bayes-mixture distribution • Relative Entropy and Bound • Predictive Convergence • Sequential Decisions and Loss Bounds • Generalization: Continuous Probability Classes • Summary
  • 19. Marcus Hutter - 19 - Universal Induction & Intelligence Bayesian Sequence Prediction: Abstract The aim of probability theory is to describe uncertainty. There are various sources and interpretations of uncertainty. I compare the frequency, objective, and subjective probabilities, and show that they all respect the same rules, and derive Bayes’ and Laplace’s famous and fundamental rules. Then I concentrate on general sequence prediction tasks. I define the Bayes mixture distribution and show that the posterior converges rapidly to the true posterior by exploiting some bounds on the relative entropy. Finally I show that the mixture predictor is also optimal in a decision-theoretic sense w.r.t. any bounded loss function.
  • 20. Marcus Hutter - 20 - Universal Induction & Intelligence Uncertainty and Probability The aim of probability theory is to describe uncertainty. Sources/interpretations for uncertainty: • Frequentist: probabilities are relative frequencies. (e.g. the relative frequency of tossing head.) • Objectivist: probabilities are real aspects of the world. (e.g. the probability that some atom decays in the next hour) • Subjectivist: probabilities describe an agent’s degree of belief. (e.g. it is (im)plausible that extraterrestrians exist)
  • 21. Marcus Hutter - 21 - Universal Induction & Intelligence Frequency Interpretation: Counting • The frequentist interprets probabilities as relative frequencies. • If in a sequence of n independent identically distributed (i.i.d.) experiments (trials) an event occurs k(n) times, the relative frequency of the event is k(n)/n. • The limit limn→∞ k(n)/n is defined as the probability of the event. • For instance, the probability of the event head in a sequence of repeatedly tossing a fair coin is 1 . 2 • The frequentist position is the easiest to grasp, but it has several shortcomings: • Problems: definition circular, limited to i.i.d, reference class problem.
  • 22. Marcus Hutter - 22 - Universal Induction & Intelligence Objective Interpretation: Uncertain Events • For the objectivist probabilities are real aspects of the world. • The outcome of an observation or an experiment is not deterministic, but involves physical random processes. • The set Ω of all possible outcomes is called the sample space. • It is said that an event E ⊂ Ω occurred if the outcome is in E. • In the case of i.i.d. experiments the probabilities p assigned to events E should be interpretable as limiting frequencies, but the application is not limited to this case. • (Some) probability axioms: p(Ω) = 1 and p({}) = 0 and 0 ≤ p(E) ≤ 1. p(A ∪ B) = p(A) + p(B) − p(A ∩ B). p(B|A) = p(A∩B) is the probability of B given event A occurred. p(A)
  • 23. Marcus Hutter - 23 - Universal Induction & Intelligence Subjective Interpretation: Degrees of Belief • The subjectivist uses probabilities to characterize an agent’s degree of belief in something, rather than to characterize physical random processes. • This is the most relevant interpretation of probabilities in AI. • We define the plausibility of an event as the degree of belief in the event, or the subjective probability of the event. • It is natural to assume that plausibilities/beliefs Bel(·|·) can be repr. by real numbers, that the rules qualitatively correspond to common sense, and that the rules are mathematically consistent. ⇒ • Cox’s theorem: Bel(·|A) is isomorphic to a probability function p(·|·) that satisfies the axioms of (objective) probabilities. • Conclusion: Beliefs follow the same rules as probabilities
  • 24. Marcus Hutter - 24 - Universal Induction & Intelligence Bayes’ Famous Rule Let D be some possible data (i.e. D is event with p(D) > 0) and {Hi }i∈I be a countable complete class of mutually exclusive hypotheses (i.e. Hi are events with Hi ∩ Hj = {} ∀i = j and i∈I Hi = Ω). Given: p(Hi ) = a priori plausibility of hypotheses Hi (subj. prob.) Given: p(D|Hi ) = likelihood of data D under hypothesis Hi (obj. prob.) Goal: p(Hi |D) = a posteriori plausibility of hypothesis Hi (subj. prob.) p(D|Hi )p(Hi ) Solution: p(Hi |D) = i∈I p(D|Hi )p(Hi ) Proof: From the definition of conditional probability and p(Hi |...) = 1 ⇒ p(D|Hi )p(Hi ) = p(Hi |D)p(D) = p(D) i∈I i∈I i∈I
  • 25. Marcus Hutter - 25 - Universal Induction & Intelligence Example: Bayes’ and Laplace’s Rule Assume data is generated by a biased coin with head probability θ, i.e. Hθ :=Bernoulli(θ) with θ ∈ Θ := [0, 1]. Finite sequence: x = x1 x2 ...xn with n1 ones and n0 zeros. Sample infinite sequence: ω ∈ Ω = {0, 1}∞ Basic event: Γx = {ω : ω1 = x1 , ..., ωn = xn } = set of all sequences starting with x. Data likelihood: pθ (x) := p(Γx |Hθ ) = θn1 (1 − θ)n0 . Bayes (1763): Uniform prior plausibility: p(θ) := p(Hθ ) = 1 R1 P ( 0 p(θ) dθ = 1 instead i∈I p(Hi ) = 1) 1 1 n1 !n0 ! Evidence: p(x) = 0 pθ (x)p(θ) dθ = 0 θn1 (1 − θ)n0 dθ = (n0 +n1 +1)!
  • 26. Marcus Hutter - 26 - Universal Induction & Intelligence Example: Bayes’ and Laplace’s Rule Bayes: Posterior plausibility of θ after seeing x is: p(x|θ)p(θ) (n+1)! n1 p(θ|x) = = θ (1−θ)n0 p(x) n1 !n0 ! . Laplace: What is the probability of seeing 1 after having observed x? p(x1) n1 +1 p(xn+1 = 1|x1 ...xn ) = = p(x) n+2 Laplace believed that the sun had risen for 5000 years = 1’826’213 days, 1 so he concluded that the probability of doomsday tomorrow is 1826215 .
  • 27. Marcus Hutter - 27 - Universal Induction & Intelligence Exercise: Envelope Paradox • I offer you two closed envelopes, one of them contains twice the amount of money than the other. You are allowed to pick one and open it. Now you have two options. Keep the money or decide for the other envelope (which could double or half your gain). • Symmetry argument: It doesn’t matter whether you switch, the expected gain is the same. • Refutation: With probability p = 1/2, the other envelope contains twice/half the amount, i.e. if you switch your expected gain increases by a factor 1.25=(1/2)*2+(1/2)*(1/2). • Present a Bayesian solution.
  • 28. Marcus Hutter - 28 - Universal Induction & Intelligence The Bayes-Mixture Distribution ξ • Assumption: The true (objective) environment µ is unknown. • Bayesian approach: Replace true probability distribution µ by a Bayes-mixture ξ. • Assumption: We know that the true environment µ is contained in some known countable (in)finite set M of environments. • The Bayes-mixture ξ is defined as ξ(x1:m ) := wν ν(x1:m ) with wν = 1, wν > 0 ∀ν ν∈M ν∈M • The weights wν may be interpreted as the prior degree of belief that the true environment is ν, or k ν = ln wν as a complexity penalty −1 (prefix code length) of environment ν. • Then ξ(x1:m ) could be interpreted as the prior subjective belief probability in observing x1:m .
  • 29. Marcus Hutter - 29 - Universal Induction & Intelligence Convergence and Decisions Goal: Given seq. x1:t−1 ≡ x<t ≡ x1 x2 ...xt−1 , predict continuation xt . Expectation w.r.t. µ: E[f (ω1:n )] := x∈X n µ(x)f (x) KL-divergence: Dn (µ||ξ) := E[ln µ(ω1:n ) ] ≤ ln wµ ∀n ξ(ω 1:n ) −1 Hellinger distance: ht (ω<t ) := a∈X ( ξ(a|ω<t ) − µ(a|ω<t ))2 ∞ −1 Rapid convergence: t=1 E[ht (ω<t )] ≤ D∞ ≤ ln wµ < ∞ implies ξ(xt |ω<t ) → µ(xt |ω<t ), i.e. ξ is a good substitute for unknown µ. Bayesian decisions: Bayes-optimal predictor Λξ suffers instantaneous Λ loss lt ξ ∈ [0, 1] at t only slightly larger than the µ-optimal predictor Λµ : ∞ ∞ t=1 E[( lt ξ − lt µ )2 ] ≤ t=1 2E[ht ] < ∞ implies rapid lt ξ → lt µ Λ Λ Λ . Λ Pareto-optimality of Λξ : Every predictor with loss smaller than Λξ in some environment µ ∈ M must be worse in another environment.
  • 30. Marcus Hutter - 30 - Universal Induction & Intelligence Generalization: Continuous Classes M In statistical parameter estimation one often has a continuous hypothesis class (e.g. a Bernoulli(θ) process with unknown θ ∈ [0, 1]). M := {νθ : θ ∈ I d }, R ξ(x) := dθ w(θ) νθ (x), dθ w(θ) = 1 I d R I d R Under weak regularity conditions [CB90,H’03]: d n Theorem: Dn (µ||ξ) ≤ ln w(µ)−1 + 2 ln 2π + O(1) where O(1) depends on the local curvature (parametric complexity) of ln νθ , and is independent n for many reasonable classes, including all stationary (k th -order) finite-state Markov processes (k = 0 is i.i.d.). Dn ∝ log(n) = o(n) still implies excellent prediction and decision for most n. [RH’07]
  • 31. Marcus Hutter - 31 - Universal Induction & Intelligence Bayesian Sequence Prediction: Summary • The aim of probability theory is to describe uncertainty. • Various sources and interpretations of uncertainty: frequency, objective, and subjective probabilities. • They all respect the same rules. • General sequence prediction: Use known (subj.) Bayes mixture ξ = ν∈M wν ν in place of unknown (obj.) true distribution µ. • Bound on the relative entropy between ξ and µ. ⇒ posterior of ξ converges rapidly to the true posterior µ. • ξ is also optimal in a decision-theoretic sense w.r.t. any bounded loss function. • No structural assumptions on M and ν ∈ M.
  • 32. Marcus Hutter - 32 - Universal Induction & Intelligence Universal Inductive Inferences: Contents • Foundations of Universal Induction • Bayesian Sequence Prediction and Confirmation • Fast Convergence • How to Choose the Prior – Universal • Kolmogorov Complexity • How to Choose the Model Class – Universal • Universal is Better than Continuous Class • Summary / Outlook / Literature
  • 33. Marcus Hutter - 33 - Universal Induction & Intelligence Universal Inductive Inferences: Abstract Solomonoff completed the Bayesian framework by providing a rigorous, unique, formal, and universal choice for the model class and the prior. I will discuss in breadth how and in which sense universal (non-i.i.d.) sequence prediction solves various (philosophical) problems of traditional Bayesian sequence prediction. I show that Solomonoff’s model possesses many desirable properties: Fast convergence, and in contrast to most classical continuous prior densities has no zero p(oste)rior problem, i.e. can confirm universal hypotheses, is reparametrization and regrouping invariant, and avoids the old-evidence and updating problem. It even performs well (actually better) in non-computable environments.
  • 34. Marcus Hutter - 34 - Universal Induction & Intelligence Induction Examples Sequence prediction: Predict weather/stock-quote/... tomorrow, based on past sequence. Continue IQ test sequence like 1,4,9,16,? Classification: Predict whether email is spam. Classification can be reduced to sequence prediction. Hypothesis testing/identification: Does treatment X cure cancer? Do observations of white swans confirm that all ravens are black? These are instances of the important problem of inductive inference or time-series forecasting or sequence prediction. Problem: Finding prediction rules for every particular (new) problem is possible but cumbersome and prone to disagreement or contradiction. Goal: A single, formal, general, complete theory for prediction. Beyond induction: active/reward learning, fct. optimization, game theory.
  • 35. Marcus Hutter - 35 - Universal Induction & Intelligence Foundations of Universal Induction Ockhams’ razor (simplicity) principle Entities should not be multiplied beyond necessity. Epicurus’ principle of multiple explanations If more than one theory is consistent with the observations, keep all theories. Bayes’ rule for conditional probabilities Given the prior belief/probability one can predict all future prob- abilities. Turing’s universal machine Everything computable by a human using a fixed procedure can also be computed by a (universal) Turing machine. Kolmogorov’s complexity The complexity or information content of an object is the length of its shortest description on a universal Turing machine. Solomonoff’s universal prior=Ockham+Epicurus+Bayes+Turing Solves the question of how to choose the prior if nothing is known. ⇒ universal induction, formal Occam, AIT,MML,MDL,SRM,...
  • 36. Marcus Hutter - 36 - Universal Induction & Intelligence Bayesian Sequence Prediction and Confirmation • Assumption: Sequence ω ∈ X ∞ is sampled from the “true” probability measure µ, i.e. µ(x) := P[x|µ] is the µ-probability that ω starts with x ∈ X n . • Model class: We assume that µ is unknown but known to belong to a countable class of environments=models=measures M = {ν1 , ν2 , ...}. [no i.i.d./ergodic/stationary assumption] • Hypothesis class: {Hν : ν ∈ M} forms a mutually exclusive and complete class of hypotheses. • Prior: wν := P[Hν ] is our prior belief in Hν ⇒ Evidence: ξ(x) := P[x] = ν∈M P[x|Hν ]P[Hν ] = ν wν ν(x) must be our (prior) belief in x. P[x|Hν ]P[Hν ] ⇒ Posterior: wν (x) := P[Hν |x] = P[x] is our posterior belief in ν (Bayes’ rule).
  • 37. Marcus Hutter - 37 - Universal Induction & Intelligence How to Choose the Prior? • Subjective: quantifying personal prior belief (not further discussed) • Objective: based on rational principles (agreed on by everyone) 1 • Indifference or symmetry principle: Choose wν = |M| for finite M. • Jeffreys or Bernardo’s prior: Analogue for compact parametric spaces M. • Problem: The principles typically provide good objective priors for small discrete or compact spaces, but not for “large” model classes like countably infinite, non-compact, and non-parametric M. • Solution: Occam favors simplicity ⇒ Assign high (low) prior to simple (complex) hypotheses. • Problem: Quantitative and universal measure of simplicity/complexity.
  • 38. Marcus Hutter - 38 - Universal Induction & Intelligence Kolmogorov Complexity K(x) K. of string x is the length of the shortest (prefix) program producing x: K(x) := minp {l(p) : U (p) = x}, U = universal TM For non-string objects o (like numbers and functions) we define K(o) := K( o ), where o ∈ X ∗ is some standard code for o. + Simple strings like 000...0 have small K, irregular (e.g. random) strings have large K. • The definition is nearly independent of the choice of U . + K satisfies most properties an information measure should satisfy. + K shares many properties with Shannon entropy but is superior. − K(x) is not computable, but only semi-computable from above. K is an excellent universal complexity measure, Fazit: suitable for quantifying Occam’s razor.
  • 39. Marcus Hutter - 39 - Universal Induction & Intelligence Schematic Graph of Kolmogorov Complexity Although K(x) is incomputable, we can draw a schematic graph
  • 40. Marcus Hutter - 40 - Universal Induction & Intelligence The Universal Prior • Quantify the complexity of an environment ν or hypothesis Hν by its Kolmogorov complexity K(ν). • Universal prior: wν = wν := 2−K(ν) is a decreasing function in U the model’s complexity, and sums to (less than) one. ⇒ Dn ≤ K(µ) ln 2, i.e. the number of ε-deviations of ξ from µ or lΛξ from lΛµ is proportional to the complexity of the environment. • No other semi-computable prior leads to better prediction (bounds). • For continuous M, we can assign a (proper) universal prior (not density) wθ = 2−K(θ) > 0 for computable θ, and 0 for uncomp. θ. U U • This effectively reduces M to a discrete class {νθ ∈ M : wθ > 0} which is typically dense in M. • This prior has many advantages over the classical prior (densities).
  • 41. Marcus Hutter - 41 - Universal Induction & Intelligence Universal Choice of Class M • The larger M the less restrictive is the assumption µ ∈ M. • The class MU of all (semi)computable (semi)measures, although only countable, is pretty large, since it includes all valid physics theories. Further, ξU is semi-computable [ZL70]. • Solomonoff’s universal prior M (x) := probability that the output of a universal TM U with random input starts with x. • Formally: M (x) := p : U (p)=x∗ 2− (p) where the sum is over all (minimal) programs p for which U outputs a string starting with x. • M may be regarded as a 2− (p) -weighted mixture over all deterministic environments νp . (νp (x) = 1 if U (p) = x∗ and 0 else) • M (x) coincides with ξU (x) within an irrelevant multiplicative constant.
  • 42. Marcus Hutter - 42 - Universal Induction & Intelligence Universal is better than Continuous Class&Prior • Problem of zero prior / confirmation of universal hypotheses: ≡ 0 in Bayes-Laplace model P[All ravens black|n black ravens] f ast U −→ 1 for universal prior wθ • Reparametrization and regrouping invariance: wθ = 2−K(θ) always U exists and is invariant w.r.t. all computable reparametrizations f . (Jeffrey prior only w.r.t. bijections, and does not always exist) • The Problem of Old Evidence: No risk of biasing the prior towards U past data, since wθ is fixed and independent of M. • The Problem of New Theories: Updating of M is not necessary, since MU includes already all. • M predicts better than all other mixture predictors based on any (continuous or discrete) model class and prior, even in non-computable environments.
  • 43. Marcus Hutter - 43 - Universal Induction & Intelligence Convergence and× Loss Bounds ∞ • Total (loss) bounds: n=1 E[hn ] < K(µ) ln 2, where ht (ω<t ) := a∈X ( ξ(a|ω<t ) − µ(a|ω<t ))2 . • Instantaneous i.i.d. bounds: For i.i.d. M with continuous, discrete, and universal prior, respectively: × 1 × 1 1 −1 −1 E[hn ] < n ln w(µ) and E[hn ] < n ln wµ = n K(µ) ln 2. • Bounds for computable environments: Rapidly M (xt |x<t ) → 1 on every computable sequence x1:∞ (whichsoever, e.g. 1∞ or the digits of π or e), i.e. M quickly recognizes the structure of the sequence. • Weak instantaneous bounds: valid for all n and x1:n and xn = xn : ¯ × × 2−K(n) < M (¯n |x<n ) < 22K(x1:n ∗)−K(n) x × • Magic instance numbers: e.g. M (0|1n ) = 2−K(n) → 0, but spikes up for simple n. M is cautious at magic instance numbers n. • Future bounds / errors to come: If our past observations ω1:n contain a lot of information about µ, we make few errors in future: + ∞ t=n+1 E[ht |ω1:n ] < [K(µ|ω1:n )+K(n)] ln 2
  • 44. Marcus Hutter - 44 - Universal Induction & Intelligence Universal Inductive Inference: Summary Universal Solomonoff prediction solves/avoids/meliorates many problems of (Bayesian) induction. We discussed: + general total bounds for generic class, prior, and loss, + the Dn bound for continuous classes, + the problem of zero p(oste)rior & confirm. of universal hypotheses, + reparametrization and regrouping invariance, + the problem of old evidence and updating, + that M works even in non-computable environments, + how to incorporate prior knowledge,
  • 45. Marcus Hutter - 45 - Universal Induction & Intelligence The Universal Similarity Metric: Contents • Kolmogorov Complexity • The Universal Similarity Metric • Tree-Based Clustering • Genomics & Phylogeny: Mammals, SARS Virus & Others • Classification of Different File Types • Language Tree (Re)construction • Classify Music w.r.t. Composer • Further Applications • Summary
  • 46. Marcus Hutter - 46 - Universal Induction & Intelligence The Universal Similarity Metric: Abstract The MDL method has been studied from very concrete and highly tuned practical applications to general theoretical assertions. Sequence prediction is just one application of MDL. The MDL idea has also been used to define the so called information distance or universal similarity metric, measuring the similarity between two individual objects. I will present some very impressive recent clustering applications based on standard Lempel-Ziv or bzip2 compression, including a completely automatic reconstruction (a) of the evolutionary tree of 24 mammals based on complete mtDNA, and (b) of the classification tree of 52 languages based on the declaration of human rights and (c) others. Based on [Cilibrasi&Vitanyi’05]
  • 47. Marcus Hutter - 47 - Universal Induction & Intelligence Conditional Kolmogorov Complexity Question: When is object=string x similar to object=string y? Universal solution: x similar y ⇔ x can be easily (re)constructed from y ⇔ Kolmogorov complexity K(x|y) := min{ (p) : U (p, y) = x} is small Examples: + 1) x is very similar to itself (K(x|x) = 0) + 2) A processed x is similar to x (K(f (x)|x) = 0 if K(f ) = O(1)). e.g. doubling, reverting, inverting, encrypting, partially deleting x. 3) A random string is with high probability not similar to any other string (K(random|y) =length(random)). The problem with K(x|y) as similarity=distance measure is that it is neither symmetric nor normalized nor computable.
• 48. Marcus Hutter - 48 - Universal Induction & Intelligence
The Universal Similarity Metric
• Symmetrization and normalization leads to a/the universal metric d:
  0 ≤ d(x,y) := max{K(x|y), K(y|x)} / max{K(x), K(y)} ≤ 1
• Every effective similarity between x and y is detected by d.
• Use K(x|y) ≈ K(xy) − K(y) and K(x) ≡ K_U(x) ≈ K_T(x) for a real compressor T
  =⇒ computable approximation, the normalized compression distance:
  d(x,y) ≈ [K_T(xy) − min{K_T(x), K_T(y)}] / max{K_T(x), K_T(y)} ≤ 1
• For T choose the Lempel-Ziv or gzip or bzip(2) (de)compressor in the applications below.
• Theory: Lempel-Ziv compresses asymptotically better than any probabilistic finite state automaton predictor/compressor.
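A minimal sketch of the normalized compression distance, with Python's built-in bz2 standing in for the compressor T (the choice of bz2 and the toy strings are assumptions for illustration):

```python
import bz2

def C(data: bytes) -> int:
    """Compressed length in bytes, a stand-in for K_T."""
    return len(bz2.compress(data))

def ncd(x: bytes, y: bytes) -> float:
    """d(x,y) ~ (K_T(xy) - min(K_T(x), K_T(y))) / max(K_T(x), K_T(y))."""
    cx, cy, cxy = C(x), C(y), C(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"ACGTACGTACGGTACCATGA" * 50          # toy 'genome'
b = b"ACGTACGTACGGTACCTTGA" * 50          # small mutation of the same toy 'genome'
c = b"To be, or not to be, that is the question. " * 25   # unrelated text

print(f"d(genome, mutated genome) = {ncd(a, b):.2f}   (similar -> small)")
print(f"d(genome, text)           = {ncd(a, c):.2f}   (dissimilar -> larger)")
```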
• 49. Marcus Hutter - 49 - Universal Induction & Intelligence
Tree-Based Clustering
• If many objects x_1, ..., x_n need to be compared, determine the similarity matrix M_ij = d(x_i, x_j) for 1 ≤ i, j ≤ n.
• Now cluster similar objects.
• There are various clustering techniques.
• Tree-based clustering: create a tree connecting similar objects,
  e.g. by the quartet method (see the sketch after this slide).
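The quartet method of [Cilibrasi&Vitanyi'05] is somewhat involved; as a hedged stand-in, the following sketch builds an NCD matrix and feeds it to ordinary average-linkage hierarchical clustering from scipy. The library choice, the toy objects, and the linkage method are assumptions, not part of the original method:

```python
import bz2
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

# NCD matrix + average-linkage clustering as a simple stand-in for the quartet method.

def ncd(x: bytes, y: bytes) -> float:
    c = lambda s: len(bz2.compress(s))
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

objects = {
    "gene1": b"ACGT" * 300,
    "gene2": b"ACGT" * 290 + b"TTTT" * 10,
    "text1": b"all men are created equal and endowed with certain rights " * 20,
    "text2": b"we hold these truths to be self evident that all men are equal " * 20,
}
names = list(objects)
n = len(names)
M = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        M[i, j] = M[j, i] = ncd(objects[names[i]], objects[names[j]])

# Build the tree from the condensed distance matrix; each row of `tree` merges two clusters.
tree = linkage(squareform(M, checks=False), method="average")
print(names)
print(tree)      # the two genes and the two texts should merge first
```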
• 50. Marcus Hutter - 50 - Universal Induction & Intelligence
Genomics & Phylogeny: Mammals
Let x_1, ..., x_n be mitochondrial genome sequences of different mammals:
Partial distance matrix M_ij using bzip2(?) (columns in the same order as the rows)
              BrBear Carp  Cat   Chimp Cow   Echid FinWh Gibbon Gorilla HMouse Human ...
BrownBear     0.002  0.943 0.887 0.935 0.906 0.944 0.915 0.939  0.940   0.934  0.930 ...
Carp          0.943  0.006 0.946 0.954 0.947 0.955 0.952 0.951  0.957   0.956  0.946 ...
Cat           0.887  0.946 0.003 0.926 0.897 0.942 0.905 0.928  0.931   0.919  0.922 ...
Chimpanzee    0.935  0.954 0.926 0.006 0.926 0.948 0.926 0.849  0.731   0.943  0.667 ...
Cow           0.906  0.947 0.897 0.926 0.006 0.936 0.885 0.931  0.927   0.925  0.920 ...
Echidna       0.944  0.955 0.942 0.948 0.936 0.005 0.936 0.947  0.947   0.941  0.939 ...
FinbackWhale  0.915  0.952 0.905 0.926 0.885 0.936 0.005 0.930  0.931   0.933  0.922 ...
Gibbon        0.939  0.951 0.928 0.849 0.931 0.947 0.930 0.005  0.859   0.948  0.844 ...
Gorilla       0.940  0.957 0.931 0.731 0.927 0.947 0.931 0.859  0.006   0.944  0.737 ...
HouseMouse    0.934  0.956 0.919 0.943 0.925 0.941 0.933 0.948  0.944   0.006  0.932 ...
Human         0.930  0.946 0.922 0.667 0.920 0.939 0.922 0.844  0.737   0.932  0.005 ...
...           ...    ...   ...   ...   ...   ...   ...   ...    ...     ...    ...   ...
• 51. Marcus Hutter - 51 - Universal Induction & Intelligence
Genomics & Phylogeny: Mammals
Evolutionary tree built from complete mammalian mtDNA of 24 species:
[Figure: tree over Carp, Cow, BlueWhale, FinbackWhale, Cat, BrownBear, PolarBear, GreySeal, HarborSeal, Horse, WhiteRhino, Gibbon, Gorilla, Human, Chimpanzee, PygmyChimp, Orangutan, SumatranOrangutan, HouseMouse, Rat, Opossum, Wallaroo, Echidna, Platypus, with the clade labels Ferungulates, Primates and Rodents within Eutheria, plus Metatheria and Prototheria.]
• 52. Marcus Hutter - 52 - Universal Induction & Intelligence
Genomics & Phylogeny: SARS Virus and Others
• Clustering of the SARS virus in relation to potentially similar viruses, based on completely sequenced genomes, using bzip2:
• The relations are very similar to the definitive tree based on medical/macrobiological/genomic analysis by biologists.
• 53. Marcus Hutter - 53 - Universal Induction & Intelligence
Genomics & Phylogeny: SARS Virus and Others
[Figure: clustering tree containing SARSTOR2v120403, HumanCorona1, MurineHep11, MurineHep2, RatSialCorona, AvianIB1, AvianIB2, PRD1, MeaslesSch, MeaslesMora, SIRV1, SIRV2, DuckAdeno1, AvianAdeno1CELO, HumanAdeno40, BovineAdeno3.]
• 54. Marcus Hutter - 54 - Universal Induction & Intelligence
Classification of Different File Types
Classification of files based on markedly different file types using bzip2:
• Four mitochondrial gene sequences
• Four excerpts from the novel “The Zeppelin's Passenger”
• Four MIDI files without further processing
• Two Linux x86 ELF executables (the cp and rm commands)
• Two compiled Java class files
No features of any specific domain of application are used!
• 55. Marcus Hutter - 55 - Universal Induction & Intelligence
Classification of Different File Types
[Figure: clustering tree in which the gene sequences (GenesFoxC, GenesRatD, GenesPolarBearB, GenesBlackBearA), the music files (MusicHendrixA/B, MusicBergA/B), the ELF executables, the Java class files, and the text excerpts (TextA–TextD) each form their own subtree.]
Perfect classification!
• 56. Marcus Hutter - 56 - Universal Induction & Intelligence
Language Tree (Re)construction
• Let x_1, ..., x_n be “The Universal Declaration of Human Rights” in various languages 1, ..., n.
• Distance matrix M_ij based on gzip. Language tree constructed from M_ij by the Fitch-Margoliash method [Li&al'03].
• All main linguistic groups can be recognized (next slide).
• 57. [Figure: language tree over versions of the Universal Declaration of Human Rights, from Basque and Hungarian through the labelled SLAVIC, BALTIC, ALTAIC, ROMANCE, CELTIC, GERMANIC and UGROFINNIC groups.]
• 58. Marcus Hutter - 58 - Universal Induction & Intelligence
Classify Music w.r.t. Composer
Let m_1, ..., m_n be pieces of music in MIDI format. Preprocessing the MIDI files:
• Delete identifying information (composer, title, ...), instrument indicators, MIDI control signals, tempo variations, ...
• Keep only note-on and note-off information.
• A note k ∈ Z half-tones above the average note is coded as a signed byte with value k.
• The whole piece is quantized in 0.05-second intervals.
• Tracks are sorted according to decreasing average volume, and then output in succession (a code sketch follows below).
Processed files x_1, ..., x_n still sounded like the original.
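A hedged sketch of such a preprocessing step on an already-parsed note list. The event format, the 2-byte time slot, and the toy data are assumptions for illustration; real MIDI parsing, metadata stripping, and note-off handling are omitted, and this is not claimed to be the exact encoding used in the experiments:

```python
def preprocess(notes, dt=0.05):
    """Encode each note as (quantized time slot, signed byte of half-tones above the
    average pitch), with tracks ordered by decreasing average volume.
    Assumption: notes are (track, start_time_sec, pitch, volume) tuples."""
    avg_pitch = sum(p for _, _, p, _ in notes) / len(notes)

    def avg_volume(track):
        vols = [v for t, _, _, v in notes if t == track]
        return sum(vols) / len(vols)

    out = bytearray()
    for track in sorted({t for t, _, _, _ in notes}, key=avg_volume, reverse=True):
        for _, start, pitch, _ in sorted(n for n in notes if n[0] == track):
            k = round(pitch - avg_pitch)          # half-tones relative to the average note
            slot = round(start / dt)              # 0.05-second quantization
            out += slot.to_bytes(2, "big") + (k & 0xFF).to_bytes(1, "big")
    return bytes(out)

toy = [(0, 0.00, 60, 90), (0, 0.50, 64, 90), (1, 0.25, 48, 40)]
print(preprocess(toy).hex())       # this byte string is what gets fed to the compressor
```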
• 59. Marcus Hutter - 59 - Universal Induction & Intelligence
Classify Music w.r.t. Composer
12 pieces of music: 4×Bach + 4×Chopin + 4×Debussy. Classification by bzip2.
[Figure: clustering tree in which the four Debussy pieces (DebusBerg1–4), the four Chopin preludes (ChopPrel1, 15, 22, 24), and the four Bach pieces (BachWTK2P1/P2, BachWTK2F1/F2) form three separate subtrees.]
Perfect grouping of processed MIDI files w.r.t. composers.
• 60. Marcus Hutter - 60 - Universal Induction & Intelligence
Further Applications
• Classification of Fungi
• Optical character recognition
• Classification of Galaxies
• Clustering of novels w.r.t. authors
• Larger data sets
See [Cilibrasi&Vitanyi'03]
• 61. Marcus Hutter - 61 - Universal Induction & Intelligence
The Clustering Method: Summary
• based on the universal similarity metric,
• based on Kolmogorov complexity,
• approximated by bzip2,
• with the similarity matrix represented by a tree,
• approximated by the quartet method,
• leads to excellent classification in many domains.
• 62. Marcus Hutter - 62 - Universal Induction & Intelligence
Universal Rational Agents: Contents
• Rational agents
• Sequential decision theory
• Reinforcement learning
• Value function
• Universal Bayes mixture and AIXI model
• Self-optimizing policies
• Pareto-optimality
• Environmental Classes
  • 63. Marcus Hutter - 63 - Universal Induction & Intelligence Universal Rational Agents: Abstract Sequential decision theory formally solves the problem of rational agents in uncertain worlds if the true environmental prior probability distribution is known. Solomonoff’s theory of universal induction formally solves the problem of sequence prediction for unknown prior distribution. Here we combine both ideas and develop an elegant parameter-free theory of an optimal reinforcement learning agent embedded in an arbitrary unknown environment that possesses essentially all aspects of rational intelligence. The theory reduces all conceptual AI problems to pure computational ones. We give strong arguments that the resulting AIXI model is the most intelligent unbiased agent possible. Other discussed topics are relations between problem classes.
• 64. Marcus Hutter - 64 - Universal Induction & Intelligence
The Agent Model
[Figure: agent-environment loop. In each cycle the agent (policy p) outputs action y_k to the environment (q) and receives perception x_k = r_k o_k, i.e. reward r_k and observation o_k.]
Most if not all AI problems can be formulated within the agent framework.
• 65. Marcus Hutter - 65 - Universal Induction & Intelligence
Rational Agents in Deterministic Environments
- p : X* → Y* is the deterministic policy of the agent, p(x_{<k}) = y_{1:k} with x_{<k} ≡ x_1 ... x_{k−1}.
- q : Y* → X* is the deterministic environment, q(y_{1:k}) = x_{1:k} with y_{1:k} ≡ y_1 ... y_k.
- Input x_k ≡ r_k o_k consists of a regular informative part o_k and reward r_k ∈ [0..r_max].
- Value V_{km}^{pq} := r_k + ... + r_m,  optimal policy p^best := arg max_p V_{1m}^{pq},  lifespan or initial horizon m.
• 66. Marcus Hutter - 66 - Universal Induction & Intelligence
Agents in Probabilistic Environments
Given history y_{1:k} x_{<k}, the probability that the environment leads to perception x_k in cycle k is (by definition) σ(x_k | y_{1:k} x_{<k}).
Abbreviation (chain rule): σ(x_{1:m} | y_{1:m}) = σ(x_1|y_1) · σ(x_2|y_{1:2} x_1) · ... · σ(x_m|y_{1:m} x_{<m}).
The average value of policy p with horizon m in environment σ is defined as
V_σ^p := (1/m) Σ_{x_{1:m}} (r_1 + ... + r_m) σ(x_{1:m}|y_{1:m}) |_{y_{1:m} = p(x_{<m})}
The goal of the agent should be to maximize the value.
• 67. Marcus Hutter - 67 - Universal Induction & Intelligence
Optimal Policy and Value
The σ-optimal policy p^σ := arg max_p V_σ^p maximizes V_σ^p ≤ V_σ^{p^σ} =: V_σ^*.
Explicit expressions for the action y_k in cycle k of the σ-optimal policy p^σ and its value V_σ^* are
y_k = arg max_{y_k} Σ_{x_k} max_{y_{k+1}} Σ_{x_{k+1}} ... max_{y_m} Σ_{x_m} (r_k + ... + r_m) · σ(x_{k:m} | y_{1:m} x_{<k}),
V_σ^* = (1/m) max_{y_1} Σ_{x_1} max_{y_2} Σ_{x_2} ... max_{y_m} Σ_{x_m} (r_1 + ... + r_m) · σ(x_{1:m} | y_{1:m}).
Keyword: Expectimax tree/algorithm.
• 68. Marcus Hutter - 68 - Universal Induction & Intelligence
Expectimax Tree/Algorithm
[Figure: expectimax tree alternating max nodes (actions y_k) and expectation nodes (percepts x_k = r_k o_k drawn from σ).]
V_σ^*(yx_{<k}) = max_{y_k} V_σ^*(yx_{<k} y_k)   (take the action y_k with maximal value),
V_σ^*(yx_{<k} y_k) = Σ_{x_k} [r_k + V_σ^*(yx_{1:k})] σ(x_k | yx_{<k} y_k)   (σ-expected reward r_k and observation o_k),
V_σ^*(yx_{1:k}) = max_{y_{k+1}} V_σ^*(yx_{1:k} y_{k+1}),  and so on recursively.
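The recursion translates almost literally into (exponential-time) code. The following sketch is an illustration only: the two-action/two-percept sets and the toy environment sigma are assumptions, not part of the slides.

```python
# Expectimax recursion V*_sigma for a finite horizon m (exponential in m, illustration only).
# Percepts x are (reward, observation) pairs; sigma(history, y) returns {x: probability}.

ACTIONS = (0, 1)
PERCEPTS = ((0, "o0"), (1, "o1"))              # x = (r, o)

def sigma(history, y):
    """Toy conditional environment sigma(x | history, y): action 1 makes reward 1 likelier."""
    p = 0.8 if y == 1 else 0.2
    return {PERCEPTS[1]: p, PERCEPTS[0]: 1 - p}

def value(history, m):
    """V*(history) = max_y sum_x [r(x) + V*(history + (y, x))] * sigma(x | history, y)."""
    if len(history) == m:
        return 0.0
    return max(
        sum(prob * (x[0] + value(history + ((y, x),), m))
            for x, prob in sigma(history, y).items())
        for y in ACTIONS)

def best_action(history, m):
    return max(ACTIONS, key=lambda y: sum(
        prob * (x[0] + value(history + ((y, x),), m))
        for x, prob in sigma(history, y).items()))

print("V* for horizon m=3 :", value((), 3))    # 3 * 0.8 = 2.4 for this toy sigma
print("optimal first action:", best_action((), 3))
```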
• 69. Marcus Hutter - 69 - Universal Induction & Intelligence
Known environment µ
• Assumption: µ is the true environment in which the agent operates.
• Then, policy p^µ is optimal in the sense that no other policy for an agent leads to higher µ^AI-expected reward.
• Special choices of µ: deterministic or adversarial environments, Markov decision processes (MDPs).
• There is no problem in principle in computing the optimal action y_k as long as µ^AI is known and computable and X, Y and m are finite.
• Things drastically change if µ^AI is unknown ...
• 70. Marcus Hutter - 70 - Universal Induction & Intelligence
The Bayes-mixture distribution ξ
Assumption: The true environment µ is unknown.
Bayesian approach: The true probability distribution µ^AI is not learned directly, but is replaced by a Bayes-mixture ξ^AI.
Assumption: We know that the true environment µ is contained in some known (finite or countable) set M of environments.
The Bayes-mixture ξ is defined as
ξ(x_{1:m} | y_{1:m}) := Σ_{ν∈M} w_ν ν(x_{1:m} | y_{1:m})   with   Σ_{ν∈M} w_ν = 1,  w_ν > 0 ∀ν.
The weights w_ν may be interpreted as the prior degree of belief that the true environment is ν. Then ξ(x_{1:m}|y_{1:m}) can be interpreted as the prior subjective belief probability of observing x_{1:m}, given actions y_{1:m}.
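As a minimal concrete instance of this definition (the two-environment class, the interface, and the uniform weights are assumptions for illustration), the mixture and the posterior weights it induces after a short history can be written down directly:

```python
# Sketch: Bayes mixture xi(x_{1:m} | y_{1:m}) = sum_nu w_nu * nu(x_{1:m} | y_{1:m})
# over a finite class M, plus the induced posterior weights.

def make_env(p_good):
    """Toy environment nu: reward 1 with prob p_good if the action is 1, else with prob 1 - p_good."""
    def prob(xs, ys):
        p = 1.0
        for (r, _), y in zip(xs, ys):
            q = p_good if y == 1 else 1 - p_good
            p *= q if r == 1 else 1 - q
        return p
    return prob

M = {"friendly": make_env(0.9), "hostile": make_env(0.1)}
w = {"friendly": 0.5, "hostile": 0.5}                   # prior weights w_nu, summing to 1

def xi(xs, ys):
    return sum(w[name] * nu(xs, ys) for name, nu in M.items())

ys = (1, 1, 1)                                          # actions y_{1:3}
xs = ((1, "o"), (1, "o"), (1, "o"))                     # perceptions x_k = (r_k, o_k)
posterior = {name: w[name] * nu(xs, ys) / xi(xs, ys) for name, nu in M.items()}
print("xi(x_{1:3} | y_{1:3}) =", xi(xs, ys))
print("posterior weights     =", posterior)             # belief concentrates on "friendly"
```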
• 71. Marcus Hutter - 71 - Universal Induction & Intelligence
Questions of Interest
• It is natural to follow the policy p^ξ which maximizes V_ξ^p.
• If µ is the true environment, the expected reward when following policy p^ξ will be V_µ^{p^ξ}.
• The optimal (but infeasible) policy p^µ yields reward V_µ^{p^µ} ≡ V_µ^*.
• Are there policies with uniformly larger value than V_µ^{p^ξ}?
• How close is V_µ^{p^ξ} to V_µ^*?
• What is the most general class M and weights w_ν?
• 72. Marcus Hutter - 72 - Universal Induction & Intelligence
A universal choice of ξ and M
• We have to assume the existence of some structure on the environment to avoid the No-Free-Lunch Theorems [Wolpert 96].
• We can only unravel effective structures which are describable by (semi)computable probability distributions.
• So we may include all (semi)computable (semi)distributions in M.
• Occam's razor and Epicurus' principle of multiple explanations tell us to assign high prior belief to simple environments.
• Using Kolmogorov's universal complexity measure K(ν) for environments ν, one should set w_ν ∼ 2^{−K(ν)}, where K(ν) is the length of the shortest program on a universal TM computing ν.
• The resulting AIXI model [Hutter:00] is a unification of (Bellman's) sequential decision theory and Solomonoff's universal induction theory.
• 73. Marcus Hutter - 73 - Universal Induction & Intelligence
The AIXI Model in one Line
The most intelligent unbiased learning agent:
y_k = arg max_{y_k} Σ_{x_k} ... max_{y_m} Σ_{x_m} [r(x_k) + ... + r(x_m)] Σ_{q : U(q, y_{1:m}) = x_{1:m}} 2^{−ℓ(q)}
is an elegant mathematical theory of AI.
Claim: AIXI is the most intelligent environment-independent, i.e. universally optimal, agent possible.
Proof: For formalizations, quantifications, and proofs, see [Hut05].
Applications: Strategic Games, Function Minimization, Supervised Learning from Examples, Sequence Prediction, Classification.
In the following we consider generic M and w_ν.
• 74. Marcus Hutter - 74 - Universal Induction & Intelligence
Pareto-Optimality of p^ξ
Policy p^ξ is Pareto-optimal in the sense that there is no other policy p with V_ν^p ≥ V_ν^{p^ξ} for all ν ∈ M and strict inequality for at least one ν.
Self-optimizing Policies
Under which circumstances does the value of the universal policy p^ξ converge to the optimum?
  V_ν^{p^ξ} → V_ν^*  for horizon m → ∞, for all ν ∈ M.   (1)
The least we must demand from M to have a chance that (1) is true is that there exists some policy p̃ at all with this property, i.e.
  ∃ p̃ : V_ν^{p̃} → V_ν^*  for horizon m → ∞, for all ν ∈ M.   (2)
Main result: (2) ⇒ (1): The necessary condition of the existence of a self-optimizing policy p̃ is also sufficient for p^ξ to be self-optimizing.
• 75. Marcus Hutter - 75 - Universal Induction & Intelligence Environments with (Non)Self-Optimizing Policies
• 76. Marcus Hutter - 76 - Universal Induction & Intelligence
Particularly Interesting Environments
• Sequence Prediction, e.g. weather or stock-market prediction.
  Strong result: V_µ^* − V_µ^{p^ξ} = O(K(µ)/m), where m is the horizon.
• Strategic Games: Learn to play well (minimax) strategic zero-sum games (like chess) or even exploit limited capabilities of the opponent.
• Optimization: Find the (approximate) minimum of a function with as few function calls as possible. Difficult exploration versus exploitation problem.
• Supervised learning: Learn functions by presenting (z, f(z)) pairs and asking for function values of z' by presenting (z', ?) pairs. Supervised learning is much faster than reinforcement learning.
AIξ quickly learns to predict, play games, optimize, and learn supervised.
• 77. Marcus Hutter - 77 - Universal Induction & Intelligence
Universal Rational Agents: Summary
• Setup: Agents acting in general probabilistic environments with reinforcement feedback.
• Assumptions: The unknown true environment µ belongs to a known class of environments M.
• Results: The Bayes-optimal policy p^ξ based on the Bayes-mixture ξ = Σ_{ν∈M} w_ν ν is Pareto-optimal and self-optimizing if M admits self-optimizing policies.
• We have reduced the AI problem to pure computational questions (which are addressed in the time-bounded AIXItl).
• AIξ incorporates all aspects of intelligence (apart from computation time).
• How to choose the horizon: use future value and universal discounting.
• ToDo: prove (optimality) properties, scale down, implement.
  • 78. Marcus Hutter - 78 - Universal Induction Intelligence Wrap Up • Setup: Given (non)iid data D = (x1 , ..., xn ), predict xn+1 • Ultimate goal is to maximize profit or minimize loss • Consider Models/Hypothesis Hi ∈ M • Max.Likelihood: Hbest = arg maxi p(D|Hi ) (overfits if M large) • Bayes: Posterior probability of Hi is p(Hi |D) ∝ p(D|Hi )p(Hi ) • Bayes needs prior(Hi ) • Occam+Epicurus: High prior for simple models. • Kolmogorov/Solomonoff: Quantification of simplicity/complexity • Bayes works if D is sampled from Htrue ∈ M • Universal AI = Universal Induction + Sequential Decision Theory
• 79. Marcus Hutter - 79 - Universal Induction & Intelligence
Literature
[CV05] R. Cilibrasi and P. M. B. Vitányi. Clustering by compression. IEEE Trans. Information Theory, 51(4):1523–1545, 2005. http://arXiv.org/abs/cs/0312044
[Hut05] M. Hutter. Universal Artificial Intelligence: Sequential Decisions based on Algorithmic Probability. Springer, Berlin, 2005. http://www.hutter1.net/ai/uaibook.htm
[Hut07] M. Hutter. On universal prediction and Bayesian confirmation. Theoretical Computer Science, 384(1):33–48, 2007. http://arxiv.org/abs/0709.1516
[LH07] S. Legg and M. Hutter. Universal intelligence: a definition of machine intelligence. Minds and Machines, 17(4):391–444, 2007. http://dx.doi.org/10.1007/s11023-007-9079-x
• 80. Marcus Hutter - 80 - Universal Induction & Intelligence
Thanks! Questions? Details:
Jobs: PostDoc and PhD positions at RSISE and NICTA, Australia
Projects at http://www.hutter1.net/
A Unified View of Artificial Intelligence
  = Decision Theory = Probability + Utility Theory
  + Universal Induction = Ockham + Bayes + Turing
Open research problems at www.hutter1.net/ai/uaibook.htm
Compression competition with 50'000 Euro prize at prize.hutter1.net