Slides explaining the distinction between bagging and boosting in light of the bias-variance trade-off, followed by some lesser-known applications of supervised learning: the effect of the tree split metric on feature importance, the effect of the decision threshold on classification accuracy, and how to adjust a model's classification threshold in supervised learning.
Note: The limitations of the accuracy metric (baseline accuracy), alternative metrics, their use cases, and their advantages and limitations are briefly discussed.
2. Understanding Bagging and Boosting
Both are ensemble techniques, in which a set of weak learners is combined to create a strong learner that achieves better performance than any single one.
Error = Bias² + Variance + Irreducible Noise
3. Bagging, short for Bootstrap Aggregating
It is a way to increase accuracy by decreasing variance.
Done by generating additional datasets through sampling with replacement (combinations with repetitions), producing bootstrap multisets of the same cardinality/size as the original dataset.
Example: Random Forest
Develops fully grown decision trees (low bias, high variance) that are kept uncorrelated to maximize the decrease in variance.
Since bagging cannot reduce bias, it requires large, unpruned trees.
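A minimal sketch of the bagging idea using scikit-learn (the dataset and parameter values are illustrative assumptions, not from the slides): a Random Forest grows fully developed, decorrelated trees on bootstrap resamples and averages their votes to cut variance.

```python
# Hedged sketch: bagging fully grown trees on bootstrap samples (assumes scikit-learn is available).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Unpruned trees (max_depth=None) are low-bias / high-variance base learners;
# bootstrap resampling plus per-split feature subsampling decorrelates them,
# so averaging their votes mainly reduces variance.
rf = RandomForestClassifier(n_estimators=300, max_depth=None, bootstrap=True, random_state=0)
print("Random Forest CV accuracy:", cross_val_score(rf, X, y, cv=5).mean())
```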
4. Boosting
It is a way to increase accuracy by reducing bias.
A two-step process:
Develop averagely performing models over subsets of the original data.
Boost their performance by combining them using a cost function (e.g. majority vote).
Note: each subset contains the elements that were misclassified, or nearly misclassified, by the previous model.
Example: Gradient Boosted Trees
Develops shallow decision trees (high bias, low variance), a.k.a. weak learners.
Reduces error mainly by reducing bias, developing each new learner to take the previous learners into account (sequential).
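A comparable sketch for boosting (again an illustrative assumption using scikit-learn, not the deck's own code): shallow, high-bias trees are fitted one after another, each one on the loss left over by the ensemble built so far.

```python
# Hedged sketch: sequential boosting of shallow trees (assumes scikit-learn is available).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# max_depth=2 keeps each tree a weak (high-bias, low-variance) learner;
# every new tree is fitted to the gradient of the loss remaining after the previous trees.
gbt = GradientBoostingClassifier(n_estimators=300, max_depth=2, learning_rate=0.1, random_state=0)
print("Gradient Boosted Trees CV accuracy:", cross_val_score(gbt, X, y, cv=5).mean())
```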
10. Comparison
Both are ensemble methods to get N learners
from 1 learner…
… but, while they are built independently for
Bagging, Boosting tries to add new models that do
well where previous models fail.
Both generate several training data sets by
random sampling…
… but only Boosting determines weights for the data
to tip the scales in favor of the most difficult cases.
Both make the final decision by averaging the N
learners (or taking the majority of them)…
… but it is an equally weighted average for Bagging
and a weighted average for Boosting, with more weight
given to the learners that perform better on the training data.
Both are good at reducing variance and provide
higher stability…
… but only Boosting tries to reduce bias. On the other
hand, Bagging may solve the overfitting problem,
while Boosting can increase it.
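One way to see the weighting difference in code (an illustrative sketch, not from the slides, using AdaBoost as the boosting example): scikit-learn's BaggingClassifier gives every tree the same vote, while AdaBoostClassifier keeps a per-learner weight that favours the learners that did better on the training data.

```python
# Hedged sketch: equal votes (bagging) vs. performance-weighted votes (boosting).
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier

X, y = make_classification(n_samples=1000, random_state=0)

bag = BaggingClassifier(n_estimators=25, random_state=0).fit(X, y)   # every learner gets the same vote
ada = AdaBoostClassifier(n_estimators=25, random_state=0).fit(X, y)  # learners weighted by training performance

# Bagging has no learner weights to inspect; AdaBoost exposes them explicitly.
print("AdaBoost learner weights (first 5):", ada.estimator_weights_[:5])
```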
11. Exploring the Scope of Supervised Learning in the Current Setup
Areas where Supervised Learning can be useful:
Feature Selection for Clustering
Evaluating Features
Increasing the Aggressiveness of the Current Setup
Bringing in New Rule Ideas
17. Feature Selection / Importance
Comparison between the important features found by Random Forest & XGBoost
Reason for the difference in feature importance between XGB & RF:
Basically, when there are several correlated features, boosting will tend to choose one and use it in several trees (if necessary). The other correlated features won't be used much (or at all).
This makes sense: the other correlated features can no longer help in the split process, since they bring no new information beyond the feature already used, and the learning is done in a serial way.
Each tree of a Random Forest is not built from the same features (there is a random selection of features for each tree), so each correlated feature has a chance to be selected in one of the trees. Therefore, looking at the whole model, it has used all the features. The learning is done in parallel, so no tree is aware of what has been used by the other trees.
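To illustrate the point, a hedged sketch with a synthetic, near-duplicated feature (the dataset and the use of scikit-learn's GradientBoostingClassifier in place of XGBoost are assumptions, not the deck's data): boosting tends to concentrate importance on one of the correlated columns, while the Random Forest spreads it across the group.

```python
# Hedged sketch: importance split across correlated features, Random Forest vs. gradient boosting.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=5, n_informative=3, random_state=0)
# Append a near-duplicate of column 0 so columns 0 and 5 are strongly correlated.
noise = 0.01 * np.random.RandomState(0).randn(len(X), 1)
X_corr = np.hstack([X, X[:, [0]] + noise])

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_corr, y)
gb = GradientBoostingClassifier(n_estimators=300, random_state=0).fit(X_corr, y)

# RF tends to share importance between columns 0 and 5; boosting tends to favour one of them.
print("RF  importances:", rf.feature_importances_.round(3))
print("GBT importances:", gb.feature_importances_.round(3))
```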
Tree Growth in XGB
When you grow too many trees, the trees start to look very similar (once there is no loss remaining to learn), so the dominant feature becomes even more important. Shallow trees reinforce this trend, because there are few possible important features at the root of a tree (the features shared between trees are most often the ones at their roots). So these results are not surprising.
In this case, you may get interesting results with random selection of columns (rate around 0.8). Decreasing eta may also help (it keeps more loss to explain after each iteration).
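A sketch of the two mitigations mentioned above, written against the xgboost scikit-learn wrapper (the column-sampling rate of 0.8 comes from the text; the dataset and the other parameter values are assumptions): per-tree column subsampling and a lower eta leave more loss for later trees to explain, which spreads importance more evenly.

```python
# Hedged sketch: column subsampling and a lower eta to de-concentrate feature importance
# (assumes the xgboost package is installed).
from sklearn.datasets import make_classification
from xgboost import XGBClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

model = XGBClassifier(
    n_estimators=500,
    max_depth=3,
    colsample_bytree=0.8,  # random selection of columns per tree (rate ~0.8, as suggested above)
    learning_rate=0.05,    # lower eta keeps more loss to explain after each iteration
    random_state=0,
)
model.fit(X, y)
print(model.feature_importances_.round(3))
```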
25. Model Accuracy and Threshold Evaluation
Confusion matrices: Random Forest with Gini Index criterion vs. Entropy criterion
Criteria   Accuracy   TN      FP   FN     TP
Gini       94.800%    46968   22   2574   362
Entropy    94.788%    46967   23   2579   357
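The confusion matrices above come from the default 0.5 cut-off, where accuracy stays close to the class-imbalance baseline. A hedged sketch of how the threshold can be moved (the synthetic imbalanced dataset and the threshold values are illustrative assumptions) to trade false negatives against false positives instead of relying on accuracy alone:

```python
# Hedged sketch: adjusting the classification threshold instead of using the default 0.5 cut-off.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# Imbalanced data (~94% negatives) to mimic the high baseline accuracy seen in the table above.
X, y = make_classification(n_samples=5000, weights=[0.94], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(n_estimators=300, criterion="gini", random_state=0).fit(X_tr, y_tr)
proba = rf.predict_proba(X_te)[:, 1]

# Lowering the threshold recovers more true positives (fewer FN) at the cost of more false positives.
for threshold in (0.5, 0.3, 0.1):
    tn, fp, fn, tp = confusion_matrix(y_te, (proba >= threshold).astype(int)).ravel()
    print(f"threshold={threshold}: TN={tn} FP={fp} FN={fn} TP={tp}")
```

The threshold that is "best" depends on the relative cost of false positives and false negatives, which is why metrics beyond raw accuracy (precision, recall, etc.) are needed to choose it.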