Ensemble Algorithms
This course content is being actively developed by Delta Analytics, a 501(c)3 Bay Area nonprofit that aims to empower communities to leverage their data for good.
Please reach out with any questions or feedback to inquiry@deltanalytics.org. Find out more about our mission here.
Delta Analytics builds technical capacity around the world.
Module 6: Ensemble approaches
Module Checklist:
❏ Ensemble approaches
❏ Bootstrap
❏ Bagging
❏ Random forest
❏ Boosting
Where are we?
What we’ve done: Exploratory Analysis, Linear Regression (our first model), Decision Trees (our second model).
Up next: Even more models!
[Pipeline diagram: Question → Exploratory Analysis → Modeling Phase → Validation Phase; we are building intuition and expanding our toolkit.]
Source: Udacity - Model Building and Validation
Recap: In this example, we predict Sam’s weekend activity using decision rules trained on historical weekend behavior.
[Decision tree diagram: root node “How much $$ do I have?” splits into “Raining?” and “Girlfriend?” subtrees, whose Y/N branches lead to the leaves Movie!, Walk in the park!, Concert!, and Clubbing!]
Decision Tree Task
Defining f(x)
The decision tree f(x) predicts the value of a target variable by learning simple decision rules inferred from the data features.
Our most important predictive feature is Sam’s budget. How do we know this? Because it is the root node.
Source: Friedman, Hastie, and Tibshirani. The Elements of Statistical Learning. Vol. 1. Springer Series in Statistics. Berlin: Springer, 2001.
Decision Tree Task
Recap: You are Sam’s weekend planner. What should he do this weekend?
Let’s get this weekend started!
You’ve run a decision tree for Sam, and now you’ve got a model. But does it work well? As always, we use our test data to check our model before we tell him what to do this weekend.
Performance
Ability to generalize to unseen data
Our goal in evaluating performance is to find a sweet spot between overfitting and underfitting. Recall our discussion of over- and underfitting in previous modules:
[Plot: model complexity vs. error, with an underfit region, an overfit region, and a sweet spot between them.]
Our most important goal is to build a model that will generalize well to unseen data.
Performance
How do we measure underfitting/overfitting?
Figuring out whether you are overfitting or underfitting involves knowing how to compare your train results to your test results.

Training R2   Relationship   Test R2   Condition
high          >              low       Overfitting
high          ~              high      Sweet spot
low           ~              low       Underfitting
low           <              high      Never happens
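To make the table concrete, here is a minimal sketch (assuming scikit-learn and a synthetic dataset; the 0.1 gap threshold is an illustrative choice, not a rule):

```python
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=104, n_features=6, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=21, random_state=0)

model = DecisionTreeRegressor(random_state=0).fit(X_train, y_train)
train_r2 = model.score(X_train, y_train)   # R2 on data the model has seen
test_r2 = model.score(X_test, y_test)      # R2 on unseen data
print(f"train R2 = {train_r2:.2f}, test R2 = {test_r2:.2f}")
if train_r2 > test_r2 + 0.1:               # illustrative gap threshold
    print("train >> test: likely overfitting")
```

An unpruned tree fits the training data almost perfectly, so this sketch typically lands in the "high > low" row of the table.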
Let’s see how this works in practice. First, we train our model f(x) using training data.
n=104 → train: 83, test: 21
[Decision tree fit on the training data: root “How much $$ do I have?” (< $50 / >= $50), then “Raining?” and “Partner?” splits, with Y/N branches leading to Movie!, Walk in the park!, Concert!, and Clubbing!; node counts n=83, n=19, n=50, n=33, n=31, n=30, n=3.]
Performance
Model evaluation
1. Split data into train/test
2. Run model on train data
3. Test model on test data
Model error = Train True Y - Train Y*
1. Split data into train/test
2. Run model on train data
3. Test model on test data
Remember generalization error? Review Module 4!
Now, we use our f(x) developed using training data to score unseen test data.
n=104 → train: 83, test: 21
Performance
Model evaluation
[Diagram: the same tree scores the test inputs (x’s); Output: Test Y*, compared against Test True Y.]
Generalization error = Test True Y - Test Y*
The holdout set method is great - it lets us test our model on unseen data, the most important check for any model.
However, one potential problem arises: what if our test dataset, even though it was picked randomly, is unrepresentative of the data?
E.g. we managed to pick the 21 weekends in Sam’s dataset where he had just broken up with his girlfriend, or failed a test, or fought with his friend, and ended up staying home. Then our test set would say that our model is awful and didn’t predict Y* accurately.
There are some shortcomings associated with the holdout method as a way to do model evaluation.
Performance
Model evaluation
n=104 → train: 83, test: 21
We can do better...
A powerful way to overcome any issues with a biased single holdout is to run the model many times. If a single run of your model is one expert opining on the data, an ensemble approach gathers a crowd of experts.
Source: Fortmann-Roe, Accurately Measuring Model Prediction Error. http://scott.fortmann-roe.com/docs/MeasuringError.html
[Illustration: Our Model vs. Our Model + model friends.]
Performance
Model evaluation
Ensemble approaches
1. Bootstrapping
2. Bagging
3. Random forests
4. Boosting
Central concept: teamwork!
Ensemble Models: model cheat sheet
Bootstrapping
● Method of repeated sampling with replacement
Bagging
● Bootstrap aggregation, or taking the average of the predicted Y*s from bootstrapped samples
● Random forest is a bagging method
● We are able to calculate out-of-bag error instead of using a test/train split
Boosting
● Iterative - each tree learns from the tree that was run last.
● The algorithm weights each training example by how incorrectly it was classified.
Bootstrapping, bagging, random forests and boosting all leverage a crowd of experts.
Bootstrapping Bagging Random Forest Boosting
Bootstrapping is a resampling method that takes random samples with replacement from the whole dataset.
n=104 → train: 83, test: 21
Instead of only using one holdout, we repeatedly construct different holdouts from the dataset.
Bootstrapping
Example of a single holdout split. Bootstrapping repeats this many, many times. We set the number of holdouts as a hyperparameter.
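A minimal sketch of bootstrap resampling (assuming numpy; the dataset size of 104 and the 5 resamples mirror the running example but are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 104          # size of Sam's dataset
n_holdouts = 5   # hyperparameter: how many bootstrap samples to draw

for b in range(n_holdouts):
    # Draw n row indices WITH replacement: some rows repeat, others are left out.
    in_bag = rng.choice(n, size=n, replace=True)
    out_of_bag = np.setdiff1d(np.arange(n), in_bag)   # rows never drawn
    print(f"sample {b}: {len(np.unique(in_bag))} unique rows in-bag, "
          f"{len(out_of_bag)} out-of-bag")
```

The rows never drawn into a given sample play the role of that sample’s own small test set, an idea we return to with the out-of-bag score below.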
Bagging is an implementation of bootstrapping: it involves taking the average of the predictions from the random samples drawn by bootstrapping.
Bootstrapping Bagging Random Forest Boosting
We train multiple models on random subsets of the dataset and average the predictions. By averaging the predictions, the chance of an unrepresentative training set is reduced.
[Diagram: five bootstrapped samples each produce a Y*; the five predictions are averaged into a single Y*.]
Bagging improves upon a single holdout by taking the average predicted Y* of bootstrapped random samples.
Bagging
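A minimal bagging sketch (assuming scikit-learn plus a synthetic dataset; scikit-learn’s BaggingRegressor packages the same idea):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X, y = make_regression(n_samples=104, n_features=6, noise=10.0, random_state=0)

preds = []
for b in range(5):                                  # five bootstrapped "experts"
    idx = rng.choice(len(X), size=len(X), replace=True)
    tree = DecisionTreeRegressor(random_state=b).fit(X[idx], y[idx])
    preds.append(tree.predict(X))                   # each tree's Y*

y_star = np.mean(preds, axis=0)                     # averaged Y* across the trees
```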
Which do you think does a better job of estimating true Y?
[Diagram: bagging (five train sets of 83 drawn from n=104, five Y*s averaged into one Y*) vs. a normal holdout (a single train set of 83 producing a single Y*).]
Bagging vs. normal holdout
In Sam’s case, we still have the problem of an unrepresentative train dataset. However, now that we’re taking different train sets and averaging them, the chance of an unrepresentative training set over-influencing the Y* is reduced.
[Diagram: five train sets of 83, each producing a Y*, averaged into a single Y*.]
Bagging almost always outperforms a single holdout.
Bagging
Out-of-Bag Score
Another amazing benefit of using bagging algorithms is the out-of-bag score. The out-of-bag score is the error rate on observations not used in each decision tree.
Source: https://www.quora.com/What-is-the-out-of-bag-error-in-Random-Forests
Bagging Out-of-bag score
[Diagram: from n=104, each of five bootstrapped train sets of 83 leaves its own set of 104 - 83 = 21 unused observations, which serves as that tree’s test set.]
Out-of-Bag Score
Why it matters: there is empirical evidence to show that the out-of-bag estimate is as accurate as using a test set of the same size as the training set. Therefore, using the out-of-bag error estimate removes the need for a set-aside test set.
Source: Breiman, 1996
Bagging Out-of-bag score
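A minimal sketch of this in practice (assuming scikit-learn, whose RandomForestRegressor exposes an out-of-bag R2 via oob_score_; the dataset is synthetic):

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=104, n_features=6, noise=10.0, random_state=0)

# No train/test split: each tree is scored on the rows it never saw.
forest = RandomForestRegressor(n_estimators=100, oob_score=True,
                               random_state=0).fit(X, y)
print(forest.oob_score_)    # out-of-bag R2; no set-aside test set needed
```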
Out-of-Bag Score
The out-of-bag score can be calculated for any bootstrap aggregation method, including:
- Random forest
- Bagging
- Boosting
Bagging Out-of-bag score
Is bagging perfect? What are some potential tradeoffs?
There are a few key limitations to bagging. One key trade-off is that training and assessing the performance of every additional holdout costs us computational power and time. The computational cost is driven by the data sample size and the number of holdouts.
Bagging
Another limitation: subsets of the same data may split on the same features and result in very similar predictions.
A key limitation of bagging is that it may yield correlated (or very similar) trees.
[Diagram: three nearly identical decision trees, each rooted at “How much $$ do I have?” with the same splits and the same leaves.]
Bagging
Many identical trees become an echo chamber of overfitted trees that repeatedly yields a similar Y* value, and repeatedly yields the same important features. This gives us false confidence in our results.
We can do better...
(“You’re doing ‘great’!” “Budget is the best feature, believe me!”)
Correlated trees may give us false confidence since they repeatedly yield the same features.
Bagging
Random forest improves on bagging’s tendency to result in correlated trees.
Bootstrapping Bagging Random Forest Boosting
Random forest improves upon bagging by only considering a random subset of features.
Source: https://dimensionless.in/introduction-to-random-forest/
Random forest is an implementation of bagging. It improves on bagging by de-correlating trees: at every split, it only considers a random subset of the features.
Feature set: a, b, c, d, e, f
“I’m going to grow a tree using a, b, c!”
“I’m going to grow a tree using a, e, d!”
“I’m going to grow a tree using d, e, f!”
“I’m going to grow a tree using b, c, d!”
[Diagram: the four trees’ Y*s are averaged into a single Y*.]
Random Forest
Random forest helps correct overfit models. Here, we are still using a subset of the data, but instead of randomly selecting a number of observations, we randomly select some number of features. Random forest helps solve the problem of overfitting.
Note that we can still calculate an accuracy score using OOB.
Performance
Improving on bagging
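A minimal sketch (assuming scikit-learn and synthetic data; max_features is the knob that de-correlates trees by limiting each split to a random feature subset):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=104, n_features=6, random_state=0)

# Each split considers only a random subset of the 6 features
# (here 3 of them, like "grow a tree using a, b, c!" above).
forest = RandomForestClassifier(n_estimators=100, max_features=3,
                                oob_score=True, random_state=0).fit(X, y)
print(forest.oob_score_)    # accuracy on out-of-bag observations
```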
Finally, boosting is a procedure that iteratively learns by combining many weak classifiers to produce a powerful committee.
Bootstrapping Bagging Random Forest Boosting
IMPORTANT NOTE: Boosting is one of the most powerful learning ideas introduced in the last 20 years. It sounds similar to but is fundamentally different from bagging and other committee-based approaches.
Boosting also creates subsets of training data using bootstrap, but each tree learns from the previous trees: that is, each tree is not random.
Source: Carnegie Mellon University, http://www.cs.cmu.edu/~guestrin/Class/10701-S06/Slides/decisiontrees-boosting.pdf
Our model gets “brighter”
Unlike random forest, each tree in boosting is not random.
Boosting
How does the model learn?
Boosting uses many weak classifiers to make a single strong classifier. A weak classifier is one whose error rate is only slightly better than random guessing.
Boosting sequentially applies weak classification algorithms to repeatedly modified versions of the data. How is the data modified?
Our model gets “brighter”
Boosting
Combining weak classifiers = one strong classifier
Each prediction is combined through a weighted majority vote to produce the final prediction.
At each iteration, the algorithm gives higher weight to observations that were classified incorrectly. This forces the algorithm to concentrate on training observations that were classified incorrectly in previous iterations.
Our model gets “brighter”
Boosting
Boosting forces the model to focus on hard-to-classify observations. Let’s go through each step of the algorithm:
1. Use the whole data set to train a model to produce Y*
2. Evaluate performance (true Y - Y*)
3. Create training set #2 including observations that were incorrectly classified
4. Repeat steps 2-3
This results in low model error, but there is a risk of overfitting (see the sketch below).
Source: https://www.analyticsvidhya.com/blog/2015/09/questions-ensemble-modeling/
Our model gets “brighter”
Boosting
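A minimal sketch of that loop (assuming scikit-learn; this is a simplified AdaBoost-style reweighting - the doubling factor is illustrative, not AdaBoost’s actual update, and the final weighted majority vote is omitted. scikit-learn’s AdaBoostClassifier implements the real algorithm):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=104, n_features=6, random_state=0)
w = np.full(len(X), 1 / len(X))              # start with uniform weights

for t in range(5):
    # Weak classifier: a depth-1 "stump", only slightly better than guessing.
    stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
    wrong = stump.predict(X) != y            # step 2: evaluate performance
    print(f"round {t}: weighted error = {w[wrong].sum():.3f}")
    # Step 3: up-weight misclassified rows so the next stump concentrates
    # on the hard cases (factor of 2 is illustrative, not AdaBoost's update).
    w[wrong] *= 2.0
    w /= w.sum()
```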
We’ve covered a lot! By now, you have an arsenal of supervised learning algorithms to apply in many situations.
In the next module, we will look at unsupervised algorithms and what they can tell us.
Bootstrapping Bagging Random Forest Boosting
End of theory
Module Checklist:
✓ Ensemble approaches
✓ Bootstrap
✓ Bagging
✓ Random forest
✓ Boosting
You are on fire! Go straight to the next module here.
Need to slow down and digest? Take a minute to write us an email about what you thought of the course. All feedback, small or large, welcome!
Email: sara@deltanalytics.org
Congrats! You finished module 6.
Find out more about Delta’s machine learning for good mission here.