2013-1 Machine Learning Lecture 03 - Andrew Moore - probabilistic and baye…
1.
Aug 25th, 2001. Copyright © 2001, Andrew W. Moore
Probabilistic and Bayesian Analytics
Andrew W. Moore, Associate Professor, School of Computer Science, Carnegie Mellon University
www.cs.cmu.edu/~awm | awm@cs.cmu.edu | 412-268-7599
Note to other teachers and users of these slides: Andrew would be delighted if you found this source material useful in giving your own lectures. Feel free to use these slides verbatim, or to modify them to fit your own needs. PowerPoint originals are available. If you make use of a significant portion of these slides in your own lecture, please include this message, or the following link to the source repository of Andrew's tutorials: http://www.cs.cmu.edu/~awm/tutorials . Comments and corrections gratefully received.
2.
Probability
• The world is a very uncertain place
• 30 years of Artificial Intelligence and Database research danced around this fact
• And then a few AI researchers decided to use some ideas from the eighteenth century
3.
What we're going to do
• We will review the fundamentals of probability.
• It's really going to be worth it
• In this lecture, you'll see an example of probabilistic analytics in action: Bayes Classifiers
4.
Discrete Random Variables
• A is a Boolean-valued random variable if A denotes an event, and there is some degree of uncertainty as to whether A occurs.
• Examples:
• A = The US president in 2023 will be male
• A = You wake up tomorrow with a headache
• A = You have Ebola
5.
Probabilities
• We write P(A) as "the fraction of possible worlds in which A is true"
• We could at this point spend 2 hours on the philosophy of this.
• But we won't.
6.
Visualizing A
(Figure: the event space of all possible worlds, with area 1, split into worlds in which A is false and worlds in which A is true. P(A) = the area of the oval of worlds in which A is true.)
7.
The Axioms of Probability
• 0 <= P(A) <= 1
• P(True) = 1
• P(False) = 0
• P(A or B) = P(A) + P(B) - P(A and B)
Where do these axioms come from? Were they "discovered"? Answers coming up later.
8.
Interpreting the axioms
• 0 <= P(A) <= 1
• P(True) = 1
• P(False) = 0
• P(A or B) = P(A) + P(B) - P(A and B)
The area of A can't get any smaller than 0, and a zero area would mean no world could ever have A true.
9.
Interpreting the axioms
• 0 <= P(A) <= 1
• P(True) = 1
• P(False) = 0
• P(A or B) = P(A) + P(B) - P(A and B)
The area of A can't get any bigger than 1, and an area of 1 would mean all worlds will have A true.
10.
Interpreting the axioms
• 0 <= P(A) <= 1
• P(True) = 1
• P(False) = 0
• P(A or B) = P(A) + P(B) - P(A and B)
(Figure: two overlapping ovals A and B in the event space.)
11.
Interpreting the axioms
• 0 <= P(A) <= 1
• P(True) = 1
• P(False) = 0
• P(A or B) = P(A) + P(B) - P(A and B)
(Figure: P(A or B) is the total area covered by ovals A and B; P(A and B) is their overlap.) Simple addition and subtraction.
12.
These Axioms are Not to be Trifled With
• There have been attempts to do different methodologies for uncertainty:
• Fuzzy Logic
• Three-valued logic
• Dempster-Shafer
• Non-monotonic reasoning
• But the axioms of probability are the only system with this property: if you gamble using them you can't be unfairly exploited by an opponent using some other system [de Finetti 1931]
13.
Theorems from the Axioms
• 0 <= P(A) <= 1, P(True) = 1, P(False) = 0
• P(A or B) = P(A) + P(B) - P(A and B)
From these we can prove: P(not A) = P(~A) = 1 - P(A)
• How?
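For reference, the proof is one application of the fourth axiom: since A or ~A = True and A and ~A = False,
$$1 = P(\text{True}) = P(A \vee \neg A) = P(A) + P(\neg A) - P(A \wedge \neg A) = P(A) + P(\neg A),$$
so $P(\neg A) = 1 - P(A)$.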
14.
Side Note
• I am inflicting these proofs on you for two reasons:
1. These kinds of manipulations will need to be second nature to you if you use probabilistic analytics in depth
2. Suffering is good for you
15.
Another important theorem
• 0 <= P(A) <= 1, P(True) = 1, P(False) = 0
• P(A or B) = P(A) + P(B) - P(A and B)
From these we can prove: P(A) = P(A ^ B) + P(A ^ ~B)
• How?
16.
Multivalued Random Variables
• Suppose A can take on more than 2 values
• A is a random variable with arity k if it can take on exactly one value out of {v1, v2, …, vk}
• Thus:
$$P(A = v_i \wedge A = v_j) = 0 \quad \text{if } i \neq j$$
$$P(A = v_1 \vee A = v_2 \vee \cdots \vee A = v_k) = 1$$
17.
An easy fact about Multivalued Random Variables:
• Using the axioms of probability (0 <= P(A) <= 1, P(True) = 1, P(False) = 0, P(A or B) = P(A) + P(B) - P(A and B))
• And assuming that A obeys
$$P(A = v_i \wedge A = v_j) = 0 \text{ if } i \neq j, \qquad P(A = v_1 \vee \cdots \vee A = v_k) = 1$$
• It's easy to prove that
$$P(A = v_1 \vee A = v_2 \vee \cdots \vee A = v_i) = \sum_{j=1}^{i} P(A = v_j)$$
18.
An easy fact about Multivalued Random Variables:
• Using the axioms of probability and the same assumptions on A, it's easy to prove that
$$P(A = v_1 \vee A = v_2 \vee \cdots \vee A = v_i) = \sum_{j=1}^{i} P(A = v_j)$$
• And thus we can prove
$$\sum_{j=1}^{k} P(A = v_j) = 1$$
19.
Another fact about Multivalued Random Variables:
• Using the axioms of probability and the same assumptions on A, it's easy to prove that
$$P(B \wedge [A = v_1 \vee A = v_2 \vee \cdots \vee A = v_i]) = \sum_{j=1}^{i} P(B \wedge A = v_j)$$
20.
Another fact about Multivalued Random Variables:
• Using the axioms of probability and the same assumptions on A, it's easy to prove that
$$P(B \wedge [A = v_1 \vee A = v_2 \vee \cdots \vee A = v_i]) = \sum_{j=1}^{i} P(B \wedge A = v_j)$$
• And thus we can prove
$$P(B) = \sum_{j=1}^{k} P(B \wedge A = v_j)$$
21.
Elementary Probability in Pictures
• P(~A) + P(A) = 1
22.
Elementary Probability in Pictures
• P(B) = P(B ^ A) + P(B ^ ~A)
23.
Elementary Probability in Pictures
$$\sum_{j=1}^{k} P(A = v_j) = 1$$
24.
Elementary Probability in Pictures
$$P(B) = \sum_{j=1}^{k} P(B \wedge A = v_j)$$
25.
Conditional Probability
• P(A|B) = Fraction of worlds in which B is true that also have A true
H = "Have a headache", F = "Coming down with Flu"
P(H) = 1/10, P(F) = 1/40, P(H|F) = 1/2
"Headaches are rare and flu is rarer, but if you're coming down with flu there's a 50-50 chance you'll have a headache."
26.
Conditional Probability
H = "Have a headache", F = "Coming down with Flu"
P(H) = 1/10, P(F) = 1/40, P(H|F) = 1/2
P(H|F) = fraction of flu-inflicted worlds in which you have a headache
= (# worlds with flu and headache) / (# worlds with flu)
= (area of "H and F" region) / (area of "F" region)
= P(H ^ F) / P(F)
27.
Definition of Conditional Probability
$$P(A \mid B) = \frac{P(A \wedge B)}{P(B)}$$
Corollary: The Chain Rule
$$P(A \wedge B) = P(A \mid B)\,P(B)$$
28.
Probabilistic Inference
H = "Have a headache", F = "Coming down with Flu"
P(H) = 1/10, P(F) = 1/40, P(H|F) = 1/2
One day you wake up with a headache. You think: "Drat! 50% of flus are associated with headaches so I must have a 50-50 chance of coming down with flu."
Is this reasoning good?
29.
Probabilistic Inference
H = "Have a headache", F = "Coming down with Flu"
P(H) = 1/10, P(F) = 1/40, P(H|F) = 1/2
P(F ^ H) = …
P(F|H) = …
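For reference, the two blanks follow from the chain rule and the definition of conditional probability:
$$P(F \wedge H) = P(H \mid F)\,P(F) = \frac{1}{2} \cdot \frac{1}{40} = \frac{1}{80}$$
$$P(F \mid H) = \frac{P(F \wedge H)}{P(H)} = \frac{1/80}{1/10} = \frac{1}{8}$$
So the 50-50 reasoning is not good: the headache raises the chance of flu from 1/40 to 1/8, not to 1/2.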
30.
Another way to understand the intuition
Thanks to Jahanzeb Sherwani for contributing this explanation: (figure on the slide.)
31.
What we just did…
$$P(B \mid A) = \frac{P(A \wedge B)}{P(A)} = \frac{P(A \mid B)\,P(B)}{P(A)}$$
This is Bayes Rule.
Bayes, Thomas (1763) An essay towards solving a problem in the doctrine of chances. Philosophical Transactions of the Royal Society of London, 53:370-418.
32.
Using Bayes Rule to Gamble
The "Win" envelope has a dollar ($1.00) and four beads in it. The "Lose" envelope has three beads and no money.
Trivial question: someone draws an envelope at random and offers to sell it to you. How much should you pay?
33.
Using Bayes Rule to Gamble
The "Win" envelope has a dollar ($1.00) and four beads in it. The "Lose" envelope has three beads and no money.
Interesting question: before deciding, you are allowed to see one bead drawn from the envelope.
Suppose it's black: how much should you pay?
Suppose it's red: how much should you pay?
34.
Calculation… (figure: the $1.00 envelope)
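The slide's picture fixes how many beads of each colour each envelope holds; the text alone doesn't, so here is the calculation in symbols only. Writing $p_W = P(\text{black} \mid \text{Win})$ and $p_L = P(\text{black} \mid \text{Lose})$ for the two envelopes' black-bead fractions, Bayes Rule gives
$$P(\text{Win} \mid \text{black}) = \frac{p_W \cdot \frac{1}{2}}{p_W \cdot \frac{1}{2} + p_L \cdot \frac{1}{2}},$$
and the fair price after seeing the bead is $\$1 \times P(\text{Win} \mid \text{bead colour})$. (For the trivial question it is $\$1 \times \frac{1}{2} = \$0.50$.)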
35.
More General Forms of Bayes Rule
$$P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B \mid A)\,P(A) + P(B \mid \neg A)\,P(\neg A)}$$
$$P(A \mid B \wedge X) = \frac{P(B \mid A \wedge X)\,P(A \wedge X)}{P(B \wedge X)}$$
36.
More General Forms of Bayes Rule
$$P(A = v_i \mid B) = \frac{P(B \mid A = v_i)\,P(A = v_i)}{\sum_{k=1}^{n_A} P(B \mid A = v_k)\,P(A = v_k)}$$
37.
Useful Easy-to-prove facts
$$P(A \mid B) + P(\neg A \mid B) = 1$$
$$\sum_{k=1}^{n_A} P(A = v_k \mid B) = 1$$
38.
The Joint Distribution
Recipe for making a joint distribution of M variables. Example: Boolean variables A, B, C.
39.
The Joint Distribution
Recipe for making a joint distribution of M variables:
1. Make a truth table listing all combinations of values of your variables (if there are M Boolean variables then the table will have 2^M rows).
Example: Boolean variables A, B, C

A B C
0 0 0
0 0 1
0 1 0
0 1 1
1 0 0
1 0 1
1 1 0
1 1 1
40.
The Joint Distribution
Recipe for making a joint distribution of M variables:
1. Make a truth table listing all combinations of values of your variables (if there are M Boolean variables then the table will have 2^M rows).
2. For each combination of values, say how probable it is.
Example: Boolean variables A, B, C

A B C Prob
0 0 0 0.30
0 0 1 0.05
0 1 0 0.10
0 1 1 0.05
1 0 0 0.05
1 0 1 0.10
1 1 0 0.25
1 1 1 0.10
41.
The Joint Distribution
Recipe for making a joint distribution of M variables:
1. Make a truth table listing all combinations of values of your variables (if there are M Boolean variables then the table will have 2^M rows).
2. For each combination of values, say how probable it is.
3. If you subscribe to the axioms of probability, those numbers must sum to 1.
Example: Boolean variables A, B, C

A B C Prob
0 0 0 0.30
0 0 1 0.05
0 1 0 0.10
0 1 1 0.05
1 0 0 0.05
1 0 1 0.10
1 1 0 0.25
1 1 1 0.10

(The slide also shows the same eight probabilities as regions of a Venn diagram over A, B, C.)
42.
Using the Joint
Once you have the JD you can ask for the probability of any logical expression E involving your attributes:
$$P(E) = \sum_{\text{rows matching } E} P(\text{row})$$
43.
Using the Joint
P(Poor ∧ Male) = 0.4654
$$P(E) = \sum_{\text{rows matching } E} P(\text{row})$$
44.
Using the Joint
P(Poor) = 0.7604
$$P(E) = \sum_{\text{rows matching } E} P(\text{row})$$
45.
Inference with the Joint
$$P(E_1 \mid E_2) = \frac{P(E_1 \wedge E_2)}{P(E_2)} = \frac{\sum_{\text{rows matching } E_1 \text{ and } E_2} P(\text{row})}{\sum_{\text{rows matching } E_2} P(\text{row})}$$
46.
Inference with the Joint
$$P(E_1 \mid E_2) = \frac{P(E_1 \wedge E_2)}{P(E_2)} = \frac{\sum_{\text{rows matching } E_1 \text{ and } E_2} P(\text{row})}{\sum_{\text{rows matching } E_2} P(\text{row})}$$
P(Male | Poor) = 0.4654 / 0.7604 = 0.612
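A minimal sketch of joint-table inference in Python, using the toy A, B, C joint from the recipe slides above (the probabilities are the ones in that table; the UCI census joint behind the Poor/Male numbers is not reproduced here, and the helper names are mine):

```python
# The toy joint distribution over Boolean A, B, C from the recipe slides.
joint = {
    (0, 0, 0): 0.30, (0, 0, 1): 0.05, (0, 1, 0): 0.10, (0, 1, 1): 0.05,
    (1, 0, 0): 0.05, (1, 0, 1): 0.10, (1, 1, 0): 0.25, (1, 1, 1): 0.10,
}

def prob(event):
    """P(E) = sum of P(row) over the rows matching E; E is a predicate on (a, b, c)."""
    return sum(p for row, p in joint.items() if event(*row))

def cond_prob(e1, e2):
    """P(E1 | E2) = P(E1 and E2) / P(E2)."""
    return prob(lambda a, b, c: e1(a, b, c) and e2(a, b, c)) / prob(e2)

print(prob(lambda a, b, c: a == 1))        # P(A) = 0.50
print(cond_prob(lambda a, b, c: b == 1,
                lambda a, b, c: a == 1))   # P(B | A) = 0.35 / 0.50 = 0.70
```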
47.
Inference is a big deal
• I've got this evidence. What's the chance that this conclusion is true?
• I've got a sore neck: how likely am I to have meningitis?
• I see my lights are out and it's 9pm. What's the chance my spouse is already asleep?
49.
Inference is a big deal
• I've got this evidence. What's the chance that this conclusion is true?
• I've got a sore neck: how likely am I to have meningitis?
• I see my lights are out and it's 9pm. What's the chance my spouse is already asleep?
• There's a thriving set of industries growing based around Bayesian Inference. Highlights are: Medicine, Pharma, Help Desk Support, Engine Fault Diagnosis
50.
Where do Joint Distributions come from?
• Idea One: Expert Humans
• Idea Two: Simpler probabilistic facts and some algebra
Example: suppose you knew P(A) = 0.7, P(B|A) = 0.2, P(B|~A) = 0.1, P(C|A^B) = 0.1, P(C|A^~B) = 0.8, P(C|~A^B) = 0.3, P(C|~A^~B) = 0.1.
Then you can automatically compute the JD using the chain rule:
P(A=x ^ B=y ^ C=z) = P(C=z | A=x ^ B=y) P(B=y | A=x) P(A=x)
In another lecture: Bayes Nets, a systematic way to do this.
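For instance, one row of the implied JD, using the numbers above:
$$P(A \wedge B \wedge C) = P(C \mid A \wedge B)\,P(B \mid A)\,P(A) = 0.1 \times 0.2 \times 0.7 = 0.014$$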
51.
Where do Joint Distributions come from?
• Idea Three: Learn them from data!
Prepare to see one of the most impressive learning algorithms you'll come across in the entire course….
52.
Learning a joint distribution
Build a JD table for your attributes in which the probabilities are unspecified, then fill in each row with
$$\hat{P}(\text{row}) = \frac{\text{records matching row}}{\text{total number of records}}$$
(Tables: the A, B, C table with every Prob entry "?" before learning, and the learned table with probabilities 0.30, 0.05, 0.10, 0.05, 0.05, 0.10, 0.25, 0.10. The 0.25 entry, for example, is the fraction of all records in which A and B are true but C is false.)
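A minimal counting sketch of this learner in Python (the record tuples and the helper name are mine):

```python
from collections import Counter

def learn_joint(records):
    """P-hat(row) = (# records matching row) / (total # records)."""
    n = len(records)
    return {row: c / n for row, c in Counter(records).items()}

# Toy usage: each record is an (A, B, C) tuple of 0/1 values.
records = [(1, 1, 0), (1, 1, 0), (0, 0, 0), (1, 0, 1)]
print(learn_joint(records))  # {(1, 1, 0): 0.5, (0, 0, 0): 0.25, (1, 0, 1): 0.25}
```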
53.
Example of Learning a Joint
• This Joint was obtained by learning from three attributes in the UCI "Adult" Census Database [Kohavi 1995]
54.
Where are we?
• We have recalled the fundamentals of probability
• We have become content with what JDs are and how to use them
• And we even know how to learn JDs from data.
55.
Density Estimation
• Our Joint Distribution learner is our first example of something called Density Estimation
• A Density Estimator learns a mapping from a set of attributes to a Probability
(Diagram: Input Attributes → Density Estimator → Probability.)
56.
Density Estimation
• Compare it against the two other major kinds of models:
(Diagram: Input Attributes → Regressor → Prediction of real-valued output; Input Attributes → Density Estimator → Probability; Input Attributes → Classifier → Prediction of categorical output.)
57.
Evaluating Density Estimation
(Diagram: the Regressor and the Classifier are evaluated by test set accuracy; for the Density Estimator the corresponding test-set criterion for estimating performance on future data* is marked "?".)
*See the Decision Tree or Cross Validation lecture for more detail.
58.
Evaluating a density estimator
• Given a record x, a density estimator M can tell you how likely the record is:
$$\hat{P}(\mathbf{x} \mid M)$$
• Given a dataset with R records, a density estimator can tell you how likely the dataset is (under the assumption that all records were independently generated from the Density Estimator's JD):
$$\hat{P}(\text{dataset} \mid M) = \hat{P}(\mathbf{x}_1 \wedge \mathbf{x}_2 \wedge \cdots \wedge \mathbf{x}_R \mid M) = \prod_{k=1}^{R} \hat{P}(\mathbf{x}_k \mid M)$$
59.
A small dataset: Miles Per Gallon
From the UCI repository (thanks to Ross Quinlan). 192 Training Set Records.

mpg   modelyear  maker
good  75to78     asia
bad   70to74     america
bad   75to78     europe
bad   70to74     america
bad   70to74     america
bad   70to74     asia
bad   70to74     asia
bad   75to78     america
:     :          :
bad   70to74     america
good  79to83     america
bad   75to78     america
good  79to83     america
bad   75to78     america
good  79to83     america
good  79to83     america
bad   70to74     america
good  75to78     europe
bad   75to78     europe
61.
A small dataset: Miles Per Gallon
192 Training Set Records (the same table as above).
$$\hat{P}(\text{dataset} \mid M) = \hat{P}(\mathbf{x}_1 \wedge \cdots \wedge \mathbf{x}_R \mid M) = \prod_{k=1}^{R} \hat{P}(\mathbf{x}_k \mid M) = 3.4 \times 10^{-203} \text{ in this case}$$
62.
Log Probabilities
Since probabilities of datasets get so small we usually use log probabilities:
$$\log \hat{P}(\text{dataset} \mid M) = \log \prod_{k=1}^{R} \hat{P}(\mathbf{x}_k \mid M) = \sum_{k=1}^{R} \log \hat{P}(\mathbf{x}_k \mid M)$$
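A minimal sketch in Python of scoring a dataset under a learned joint with logs (the floor value echoes the hard-wired 10^-20 minimum probability mentioned a few slides later; the helper names are mine):

```python
from collections import Counter
from math import log

def learn_joint(records):
    n = len(records)
    return {row: c / n for row, c in Counter(records).items()}

def log_likelihood(model, records, floor=1e-20):
    # sum_k log P-hat(x_k | M); unseen rows get a floor probability
    # rather than log(0) = -infinity.
    return sum(log(max(model.get(row, 0.0), floor)) for row in records)

train = [(1, 1, 0), (1, 1, 0), (0, 0, 0), (1, 0, 1)]
model = learn_joint(train)
print(log_likelihood(model, train))        # training-set log probability
print(log_likelihood(model, [(0, 1, 1)]))  # unseen record: log(1e-20), a heavy penalty
```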
63.
A small dataset: Miles Per Gallon
192 Training Set Records (the same table as above).
$$\log \hat{P}(\text{dataset} \mid M) = \sum_{k=1}^{R} \log \hat{P}(\mathbf{x}_k \mid M) = -466.19 \text{ in this case}$$
64.
Summary: The Good News
• We have a way to learn a Density Estimator from data.
• Density estimators can do many good things…
• Can sort the records by probability, and thus spot weird records (anomaly detection)
• Can do inference: P(E1|E2). Automatic Doctor / Help Desk etc
• Ingredient for Bayes Classifiers (see later)
65.
Summary: The Bad News
• Density estimation by directly learning the joint is trivial, mindless and dangerous
66.
Using a test set
An independent test set with 196 cars has a worse log likelihood (actually it's a billion quintillion quintillion quintillion quintillion times less likely)…. Density estimators can overfit. And the full joint density estimator is the overfittiest of them all!
67.
Overfitting Density Estimators
If this ever happens, it means there are certain combinations that we learn are impossible:
$$\log \hat{P}(\text{testset} \mid M) = \sum_{k=1}^{R} \log \hat{P}(\mathbf{x}_k \mid M) = -\infty \quad \text{if } \hat{P}(\mathbf{x}_k \mid M) = 0 \text{ for any } k$$
68.
Using a test set
The only reason that our test set didn't score -infinity is that my code is hard-wired to always predict a probability of at least one in 10^20.
We need Density Estimators that are less prone to overfitting.
69.
Naïve Density Estimation
The problem with the Joint Estimator is that it just mirrors the training data. We need something which generalizes more usefully.
The naïve model generalizes strongly: Assume that each attribute is distributed independently of any of the other attributes.
70.
Independently Distributed Data
• Let x[i] denote the i'th field of record x.
• The independently distributed assumption says that for any i, v, u1, u2, …, u(i-1), u(i+1), …, uM:
$$P(x[i] = v \mid x[1] = u_1, \ldots, x[i-1] = u_{i-1}, x[i+1] = u_{i+1}, \ldots, x[M] = u_M) = P(x[i] = v)$$
• Or in other words, x[i] is independent of {x[1], x[2], …, x[i-1], x[i+1], …, x[M]}
• This is often written as
$$x[i] \perp \{x[1], x[2], \ldots, x[i-1], x[i+1], \ldots, x[M]\}$$
71.
A note about independence
• Assume A and B are Boolean Random Variables. Then "A and B are independent" if and only if P(A|B) = P(A)
• "A and B are independent" is often notated as $A \perp B$
72.
Independence Theorems
• Assume P(A|B) = P(A). Then P(A ^ B) = P(A) P(B)
• Assume P(A|B) = P(A). Then P(B|A) = P(B)
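For reference, the two one-line proofs, via the chain rule and Bayes Rule respectively:
$$P(A \wedge B) = P(A \mid B)\,P(B) = P(A)\,P(B)$$
$$P(B \mid A) = \frac{P(A \mid B)\,P(B)}{P(A)} = \frac{P(A)\,P(B)}{P(A)} = P(B)$$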
73.
Independence Theorems
• Assume P(A|B) = P(A). Then P(~A|B) = P(~A)
• Assume P(A|B) = P(A). Then P(A|~B) = P(A)
74.
Multivalued Independence
For multivalued Random Variables A and B, $A \perp B$ if and only if
$$\forall u, v : P(A = u \mid B = v) = P(A = u)$$
from which you can then prove things like…
$$\forall u, v : P(A = u \wedge B = v) = P(A = u)\,P(B = v)$$
$$\forall u, v : P(B = v \mid A = u) = P(B = v)$$
75.
Back to Naïve Density Estimation
• Let x[i] denote the i'th field of record x.
• Naïve DE assumes x[i] is independent of {x[1], x[2], …, x[i-1], x[i+1], …, x[M]}
• Example:
• Suppose that each record is generated by randomly shaking a green die and a red die
• Dataset 1: A = red value, B = green value
• Dataset 2: A = red value, B = sum of values
• Dataset 3: A = sum of values, B = difference of values
• Which of these datasets violates the naïve assumption?
76.
Using the Naïve Distribution
• Once you have a Naïve Distribution you can easily compute any row of the joint distribution.
• Suppose A, B, C and D are independently distributed. What is P(A ^ ~B ^ C ^ ~D)?
77.
Using the Naïve Distribution
• Once you have a Naïve Distribution you can easily compute any row of the joint distribution.
• Suppose A, B, C and D are independently distributed. What is P(A ^ ~B ^ C ^ ~D)?
= P(A | ~B ^ C ^ ~D) P(~B ^ C ^ ~D)
= P(A) P(~B ^ C ^ ~D)
= P(A) P(~B | C ^ ~D) P(C ^ ~D)
= P(A) P(~B) P(C ^ ~D)
= P(A) P(~B) P(C | ~D) P(~D)
= P(A) P(~B) P(C) P(~D)
78.
Naïve Distribution General Case
• Suppose x[1], x[2], …, x[M] are independently distributed:
$$P(x[1] = u_1, x[2] = u_2, \ldots, x[M] = u_M) = \prod_{k=1}^{M} P(x[k] = u_k)$$
• So if we have a Naïve Distribution we can construct any row of the implied Joint Distribution on demand.
• So we can do any inference
• But how do we learn a Naïve Density Estimator?
79.
Learning a Naïve Density Estimator
$$\hat{P}(x[i] = u) = \frac{\#\text{records in which } x[i] = u}{\text{total number of records}}$$
Another trivial learning algorithm!
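A minimal sketch of the Naïve DE in Python, under the slide's assumptions (records as tuples of discrete values; the toy data and helper names are mine):

```python
from collections import Counter
from math import prod

# Toy Boolean records; each tuple is one record x with fields x[0..M-1].
records = [(1, 0, 1), (1, 1, 1), (0, 0, 0), (1, 0, 1)]
R, M = len(records), len(records[0])

# Learn one marginal P-hat(x[i] = u) per attribute by counting.
marginals = [
    {u: c / R for u, c in Counter(rec[i] for rec in records).items()}
    for i in range(M)
]

def naive_row_prob(row):
    """Any row of the implied joint is the product of the per-attribute marginals."""
    return prod(marginals[i].get(row[i], 0.0) for i in range(M))

print(naive_row_prob((1, 0, 1)))  # 0.75 * 0.75 * 0.75 = 0.421875
```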
80.
Contrast: Joint DE vs. Naïve DE
• Joint DE can model anything; Naïve DE can model only very boring distributions.
• Joint DE has no problem modeling "C is a noisy copy of A"; that is outside Naïve's scope.
• Given 100 records and more than 6 Boolean attributes, Joint DE will screw up badly; given 100 records and 10,000 multivalued attributes, Naïve DE will be fine.
81.
Empirical Results: "Hopeless"
The "hopeless" dataset consists of 40,000 records and 21 Boolean attributes called a, b, c, …, u. Each attribute in each record is generated 50-50 randomly as 0 or 1.
Despite the vast amount of data, "Joint" overfits hopelessly and does much worse.
(Figure: average test set log probability during 10 folds of k-fold cross-validation*. *Described in a future Andrew lecture.)
82.
Empirical Results: "Logical"
The "logical" dataset consists of 40,000 records and 4 Boolean attributes called a, b, c, d, where a, b, c are generated 50-50 randomly as 0 or 1. D = A ^ ~C, except that in 10% of records it is flipped.
(Figures: the DE learned by "Joint" and the DE learned by "Naive".)
84.
Empirical Results: "MPG"
The "MPG" dataset consists of 392 records and 8 attributes.
(Figures: a tiny part of the DE learned by "Joint", and the DE learned by "Naive".)
86.
Empirical Results: "Weight vs. MPG"
Suppose we train only from the "Weight" and "MPG" attributes.
(Figures: the DE learned by "Joint" and the DE learned by "Naive".)
88.
"Weight vs. MPG": The best that Naïve can do
(Figures: the DE learned by "Joint" and the DE learned by "Naive".)
89.
Reminder: The Good News
• We have two ways to learn a Density Estimator from data.
• *In other lectures we'll see vastly more impressive Density Estimators (Mixture Models, Bayesian Networks, Density Trees, Kernel Densities and many more)
• Density estimators can do many good things…
• Anomaly detection
• Can do inference: P(E1|E2). Automatic Doctor / Help Desk etc
• Ingredient for Bayes Classifiers
90.
Bayes Classifiers
• A formidable and sworn enemy of decision trees
(Diagram: Input Attributes → Classifier → Prediction of categorical output, with DT and BC as rival classifiers.)
91.
How to build a Bayes Classifier
• Assume you want to predict output Y which has arity nY and values v1, v2, …, vnY.
• Assume there are m input attributes called X1, X2, …, Xm
• Break dataset into nY smaller datasets called DS1, DS2, …, DSnY.
• Define DSi = records in which Y = vi
• For each DSi, learn Density Estimator Mi to model the input distribution among the Y = vi records.
92.
How to build a Bayes Classifier (setup as above)
• Mi estimates P(X1, X2, …, Xm | Y = vi)
93.
How to build a Bayes Classifier (setup as above)
• Idea: when a new set of input values (X1 = u1, X2 = u2, …, Xm = um) comes along to be evaluated, predict the value of Y that makes P(X1, X2, …, Xm | Y = v) most likely:
$$Y^{\text{predict}} = \operatorname{argmax}_v P(X_1 = u_1 \cdots X_m = u_m \mid Y = v)$$
Is this a good idea?
94.
How to build a Bayes Classifier (setup as above)
• Idea: when a new set of input values (X1 = u1, X2 = u2, …, Xm = um) comes along to be evaluated, predict the value of Y that makes P(X1, X2, …, Xm | Y = v) most likely:
$$Y^{\text{predict}} = \operatorname{argmax}_v P(X_1 = u_1 \cdots X_m = u_m \mid Y = v)$$
Is this a good idea? This is a Maximum Likelihood classifier. It can get silly if some Ys are very unlikely.
95.
How to build a Bayes Classifier (setup as above)
• Idea: when a new set of input values (X1 = u1, X2 = u2, …, Xm = um) comes along to be evaluated, predict the value of Y that makes P(Y = v | X1, X2, …, Xm) most likely:
$$Y^{\text{predict}} = \operatorname{argmax}_v P(Y = v \mid X_1 = u_1 \cdots X_m = u_m)$$
Is this a good idea? Much better idea.
96.
Terminology
• MLE (Maximum Likelihood Estimator):
$$Y^{\text{predict}} = \operatorname{argmax}_v P(X_1 = u_1 \cdots X_m = u_m \mid Y = v)$$
• MAP (Maximum A-Posteriori Estimator):
$$Y^{\text{predict}} = \operatorname{argmax}_v P(Y = v \mid X_1 = u_1 \cdots X_m = u_m)$$
97.
Getting what we need
$$Y^{\text{predict}} = \operatorname{argmax}_v P(Y = v \mid X_1 = u_1 \cdots X_m = u_m)$$
98.
Getting a posterior probability
$$P(Y = v \mid X_1 = u_1 \cdots X_m = u_m) = \frac{P(X_1 = u_1 \cdots X_m = u_m \mid Y = v)\,P(Y = v)}{P(X_1 = u_1 \cdots X_m = u_m)} = \frac{P(X_1 = u_1 \cdots X_m = u_m \mid Y = v)\,P(Y = v)}{\sum_{j=1}^{n_Y} P(X_1 = u_1 \cdots X_m = u_m \mid Y = v_j)\,P(Y = v_j)}$$
99.
Bayes Classifiers in a nutshell
1. Learn the distribution over inputs for each value of Y.
2. This gives P(X1, X2, …, Xm | Y = vi).
3. Estimate P(Y = vi) as the fraction of records with Y = vi.
4. For a new prediction:
$$Y^{\text{predict}} = \operatorname{argmax}_v P(Y = v \mid X_1 = u_1 \cdots X_m = u_m) = \operatorname{argmax}_v P(X_1 = u_1 \cdots X_m = u_m \mid Y = v)\,P(Y = v)$$
100.
Bayes Classifiers in a nutshell (steps and formula as above)
We can use our favorite Density Estimator for step 1. Right now we have two options:
• Joint Density Estimator
• Naïve Density Estimator
101.
Joint Density Bayes Classifier
$$Y^{\text{predict}} = \operatorname{argmax}_v P(X_1 = u_1 \cdots X_m = u_m \mid Y = v)\,P(Y = v)$$
In the case of the joint Bayes Classifier this degenerates to a very simple rule:
Ypredict = the most common value of Y among records in which X1 = u1, X2 = u2, …, Xm = um.
Note that if no records have the exact set of inputs X1 = u1, X2 = u2, …, Xm = um, then P(X1, X2, …, Xm | Y = vi) = 0 for all values of Y. In that case we just have to guess Y's value.
102.
Joint BC Results: "Logical"
The "logical" dataset consists of 40,000 records and 4 Boolean attributes called a, b, c, d, where a, b, c are generated 50-50 randomly as 0 or 1. D = A ^ ~C, except that in 10% of records it is flipped.
(Figure: the Classifier learned by "Joint BC".)
103.
Joint BC Results: "All Irrelevant"
The "all irrelevant" dataset consists of 40,000 records and 15 Boolean attributes called a, b, c, d, …, o, where a, b, c are generated 50-50 randomly as 0 or 1. v (output) = 1 with probability 0.75, 0 with probability 0.25.
104.
Naïve Bayes Classifier
$$Y^{\text{predict}} = \operatorname{argmax}_v P(X_1 = u_1 \cdots X_m = u_m \mid Y = v)\,P(Y = v)$$
In the case of the naive Bayes Classifier this can be simplified:
$$Y^{\text{predict}} = \operatorname{argmax}_v P(Y = v) \prod_{j=1}^{m} P(X_j = u_j \mid Y = v)$$
105.
Naïve Bayes Classifier (simplification as above)
Technical Hint: if you have 10,000 input attributes that product will underflow in floating point math. You should use logs:
$$Y^{\text{predict}} = \operatorname{argmax}_v \Big[ \log P(Y = v) + \sum_{j=1}^{m} \log P(X_j = u_j \mid Y = v) \Big]$$
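A minimal end-to-end sketch of a naïve Bayes Classifier with the log trick in Python, under the same assumptions as the earlier sketches (discrete attributes, records as tuples; the toy data and helper names are mine):

```python
from collections import Counter, defaultdict
from math import log

# Toy training data: ((x1, x2), y) pairs.
data = [((1, 0), 'good'), ((1, 1), 'good'),
        ((0, 0), 'bad'), ((0, 1), 'bad'), ((1, 0), 'bad')]

class_counts = Counter(y for _, y in data)
prior = {v: c / len(data) for v, c in class_counts.items()}   # P-hat(Y = v)

# Per-class, per-attribute value counts for P-hat(X_j = u | Y = v).
cond = defaultdict(Counter)                                   # (v, j) -> Counter over u
for x, y in data:
    for j, u in enumerate(x):
        cond[(y, j)][u] += 1

def predict(x, floor=1e-20):
    """argmax_v [ log P(Y=v) + sum_j log P(X_j=u_j | Y=v) ]; the floor avoids
    log(0), echoing the hard-wired minimum probability mentioned earlier."""
    def score(v):
        return log(prior[v]) + sum(
            log(max(cond[(v, j)][u] / class_counts[v], floor))
            for j, u in enumerate(x))
    return max(prior, key=score)

print(predict((1, 0)))  # -> 'good'
```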
106.
BC Results: "XOR"
The "XOR" dataset consists of 40,000 records and 2 Boolean inputs called a and b, generated 50-50 randomly as 0 or 1. c (output) = a XOR b.
(Figures: the Classifier learned by "Naive BC" and the Classifier learned by "Joint BC".)
107.
Naive BC Results: "Logical"
The "logical" dataset consists of 40,000 records and 4 Boolean attributes called a, b, c, d, where a, b, c are generated 50-50 randomly as 0 or 1. D = A ^ ~C, except that in 10% of records it is flipped.
(Figure: the Classifier learned by "Naive BC".)
108.
Naive BC Results: "Logical"
The same "logical" dataset, with the Classifier learned by "Joint BC".
This result surprised Andrew until he had thought about it a little.
109.
Naïve BC Results: "All Irrelevant"
The "all irrelevant" dataset consists of 40,000 records and 15 Boolean attributes called a, b, c, d, …, o, where a, b, c are generated 50-50 randomly as 0 or 1. v (output) = 1 with probability 0.75, 0 with probability 0.25.
(Figure: the Classifier learned by "Naive BC".)
110.
BC Results: "MPG": 392 records
(Figure: the Classifier learned by "Naive BC".)
111.
BC Results: "MPG": 40 records
112.
More Facts About Bayes Classifiers
• Many other density estimators can be slotted in*.
• Density estimation can be performed with real-valued inputs*
• Bayes Classifiers can be built with real-valued inputs*
• Rather Technical Complaint: Bayes Classifiers don't try to be maximally discriminative; they merely try to honestly model what's going on*
• Zero probabilities are painful for Joint and Naïve. A hack (justifiable with the magic words "Dirichlet Prior") can help*.
• Naïve Bayes is wonderfully cheap. And survives 10,000 attributes cheerfully!
*See future Andrew Lectures
113.
What you should know
• Probability
• Fundamentals of Probability and Bayes Rule
• What's a Joint Distribution
• How to do inference (i.e. P(E1|E2)) once you have a JD
• Density Estimation
• What is DE and what is it good for
• How to learn a Joint DE
• How to learn a naïve DE
114.
What you should know
• Bayes Classifiers
• How to build one
• How to predict with a BC
• Contrast between naïve and joint BCs
115.
Interesting Questions
• Suppose you were evaluating NaiveBC, JointBC, and Decision Trees
• Invent a problem where only NaiveBC would do well
• Invent a problem where only Dtree would do well
• Invent a problem where only JointBC would do well
• Invent a problem where only NaiveBC would do poorly
• Invent a problem where only Dtree would do poorly
• Invent a problem where only JointBC would do poorly