Machine learning can be used to predict whether a user will purchase a book on an online book store. Features about the user, book, and user-book interactions can be generated and used in a machine learning model. A multi-stage modeling approach could first predict if a user will view a book, and then predict if they will purchase it, with the predicted view probability as an additional feature. Decision trees, logistic regression, or other classification algorithms could be used to build models at each stage. This approach aims to leverage user data to provide personalized book recommendations.
2. Outline
• Introduction
• Types of Learning
• Machine Learning Models
• Definition
• Regularization
• Metrics
• Feature Engineering
• Real World Examples
3. What is Machine Learning?
• “Learning is any process by which a system
improves performance from experience.” -
Herbert Simon
• Machine learning is training computers to
effectively achieve a performance criterion
using examples or historical data.
4. Why?
Machine Learning is used when
Human expertise is unavailable (space expeditions).
Human expertise is not explicable (speech translation).
Information needs to be personalized (education, medicine).
Domains with huge amounts of data.
5. Applications
• Education --- Developing Learning Paths
for Students
• Healthcare --- Personalized Medicine
• Retail --- Product recommendations
• Web --- Search
• Manufacturing --- robotics, control
• Finance – fraud detection, asset
management
• HR --- people analytics
• Medical --- drug discovery, automated
diagnosis
• ………..
6. Types of Learning
Learning Objective ---
Supervised
• Examples or training data are available
o Human annotations, user interactions
• Data contains features correlated with the desired outcome
• A model is learned from the examples
• The goal of the model is to predict future behavior
Unsupervised
• Direct examples are not available
• Data contains correlated features, but the outcome may not be defined
• It is possible to create clusters correlated with the learning objective based on patterns in the data
7. Types of Learning
Models ---
Supervised
• Regression --- Linear, Decision Trees
• Classification --- Logistic Regression, Naïve Bayes, SVM, Decision Trees (RF, GBDT)
Unsupervised
• Clustering --- k-means
• Similarity based results
• Transfer Learning
8. Linear Regression
Linear regression was developed in the field of statistics and is studied as a model for understanding the relationship
between input and output numerical variables, but has been borrowed by machine learning. It is both a statistical
algorithm and a machine learning algorithm.
Linear Regression | simple linear regression | Ordinary Least Squares | multiple linear regression
• The dependent variable Y has a linear relationship to the independent variable X
• For each value of X, the probability distribution of Y has the same standard deviation σ.
• For any given value of X, the Y values are independent, as indicated by a random pattern on the residual plot.
• The Y values are roughly normally distributed (i.e., symmetric and unimodal).
Example: Sales prediction from a company’s advertising spend on radio, TV, and newspapers.
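A minimal scikit-learn sketch of the sales example above; the advertising-spend numbers and the new-campaign query are made up for illustration.

import numpy as np
from sklearn.linear_model import LinearRegression

# Assumed toy data: spend on radio, TV, newspapers (columns) vs. sales (target).
X = np.array([[10.0, 200.0, 30.0],
              [15.0, 120.0, 10.0],
              [25.0, 300.0, 50.0],
              [5.0,   80.0, 20.0]])
y = np.array([22.0, 15.0, 35.0, 9.0])

model = LinearRegression().fit(X, y)
print(model.coef_, model.intercept_)         # learned weights per channel
print(model.predict([[12.0, 150.0, 25.0]]))  # predicted sales for a new spend mix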
10. Standard Deviation -- single attribute
• Standard Deviation (S) is used for tree building (branching).
• Coefficient of Variation (CV) is used to decide when to stop
branching. We can use the count (n) as well.
• Average (Avg) is the value in the leaf nodes.
12. Standard Deviation Reduction
The dataset is then split on the different attributes. The standard deviation for each branch is calculated. The
resulting standard deviation is subtracted from the standard deviation before the split. The result is the standard
deviation reduction.
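A small sketch of the standard deviation reduction computation, assuming a toy numeric target (hours) and one categorical attribute (outlook); the values are illustrative, not the slide's dataset.

import numpy as np

def sdr(target, feature):
    """Standard deviation reduction from splitting `target` on categorical `feature`."""
    target = np.asarray(target, dtype=float)
    feature = np.asarray(feature)
    before = target.std()                       # standard deviation before the split
    weighted = 0.0
    for value in np.unique(feature):
        subset = target[feature == value]
        weighted += len(subset) / len(target) * subset.std()
    return before - weighted                    # larger reduction => better split attribute

hours = [25, 30, 46, 45, 52, 23, 43, 35, 38, 46, 48, 52, 44, 30]
outlook = ["sunny", "sunny", "overcast", "rainy", "rainy", "rainy", "overcast",
           "sunny", "sunny", "rainy", "sunny", "overcast", "overcast", "rainy"]
print(sdr(hours, outlook))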
13. Divide the Dataset
The dataset is divided based on the values of
the selected attribute. This process is run
recursively on the non-leaf branches, until all
data is processed.
14. Overcast does not need more splitting
"Overcast" subset does not need any further splitting because
its CV (8%) is less than the threshold (10%). The related leaf
node gets the average of the "Overcast" subset.
18. Pros and Cons
Pros ---
1. Simple and easy to understand: Decision Tree looks like
simple if-else statements which are very easy to understand.
2. Decision Tree can handle both continuous and categorical
variables.
3. No feature scaling required: No feature scaling
(standardization and normalization) required in case of Decision
Tree as it uses rule based approach instead of distance
calculation.
4. Handles non-linear parameters efficiently: non-linear
parameters don't affect the performance of a Decision Tree.
5. Decision Tree can automatically handle missing values.
6. Decision Tree is usually robust to outliers and can handle
them automatically.
Cons ---
1. Overfitting: This is the main problem of the Decision Tree.
2. High variance: Because of overfitting, there is a high chance
of high variance in the output, which leads to many errors in the
final estimation and high inaccuracy in the results. Pushing for
zero bias (overfitting) leads to high variance.
3. Unstable: Adding a new data point can lead to
re-generation of the overall tree and all nodes need
to be recalculated and recreated.
4. Affected by noise: A little bit of noise can make the tree
unstable, leading to wrong predictions.
19. Applications
• Trendline --- A trend line represents a trend, the long-term movement in time series data after other
components have been accounted for. It tells whether a particular data set (say GDP, oil prices or stock prices) has
increased or decreased over a period of time.
• Epidemiology --- Early evidence relating tobacco smoking to mortality and morbidity came from observational
studies employing regression analysis. In order to reduce spurious correlations when analyzing observational data,
researchers usually include several variables in their regression models in addition to the variable of primary interest.
• Finance --- The capital asset pricing model uses linear regression as well as the concept of beta for analyzing and
quantifying the systematic risk of an investment. This comes directly from the beta coefficient of the linear regression
model that relates the return on the investment to the return on all risky assets.
• Economics --- Linear regression is the predominant empirical tool in economics. For example, it is used to
predict consumption spending,[20] fixed investment spending, inventory investment, purchases of a
country's exports,[21] spending on imports,[21] the demand to hold liquid assets,[22] labor demand,[23] and labor
supply.[23]
https://en.wikipedia.org/wiki/Linear_regression
20. Classification
http://cs229.stanford.edu/notes2020spring/cs229-notes1.pdf
Values of Y, i.e. the response, can take discrete values
o Binary classification – the response can belong to one of two
classes (0, 1)
Rating – thumbs up, thumbs down.
o Multi-class classification – there can be n classes
[1, …, n]
Movie/restaurant rating can range over
[1, 2, 3, 4, 5]
o Multi-class, multi-label classification.
Example –
Concept space in Probability has many
concepts (classes) – [Probability, Bayes
Theorem, Discrete PDF – Binomial, Poisson,
Continuous PDF – normal, exponential, …]
A question will typically belong to multiple
classes.
21. Classification Pipeline
Data Cleaning → Collecting Training Data → Model Building → Model Evaluation
Offline (SME) / Online (Users)
• Data cleaning: reduces noise, ensures quality, improves overall performance.
• Training data collection: examples of the classes that we are trying to model; model performance is directly correlated with the quality of training data.
• Model building and evaluation: model selection, architecture, parameter tuning.
22. Logistic Regression -- Binary Classification
Logistic Function or Sigmoid function
https://sebastianraschka.com/faq/docs/logistic-why-sigmoid.html -- discussion on the logit function
Interpretation: Output is the probability that y=1 given x.
Output > 0.5 represents y =1 and output < 0.5 represents y=0
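A minimal sketch of the sigmoid and the 0.5 decision rule; the weights and input below are assumed values for illustration only.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Linear score w.x + b mapped to a probability that y = 1.
w, b = np.array([0.8, -0.4]), 0.1   # assumed weights
x = np.array([1.5, 2.0])            # assumed feature vector
p = sigmoid(w @ x + b)
print(p, int(p > 0.5))              # probability of y=1 and the thresholded class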
24. Multinomial Logistic / Softmax
Sigmoid gets replaced by the softmax function.
This applies when there are k classes, labelled 1, …, k.
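A minimal softmax sketch, assuming three illustrative class scores.

import numpy as np

def softmax(z):
    z = z - z.max()        # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1])  # one score per class (k = 3), illustrative
print(softmax(scores))              # probabilities over the k classes, summing to 1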
25. Regularization --- Linear
/Logistic
• Penalty against complexity
o so that the model does not pick up
"peculiarities," "noise,"
or "imagine a pattern
where there is none.”
• Helps with generalizing to
new data sets.
• Adds a bias to the
model when it suffers from
high variance or overfitting
L2 regularization
L1 regularization
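A hedged sketch contrasting L2 (ridge) and L1 (lasso) penalties on synthetic data where only the first feature matters; the alpha values are arbitrary.

import numpy as np
from sklearn.linear_model import Ridge, Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = X[:, 0] * 3.0 + rng.normal(scale=0.5, size=200)   # only feature 0 is informative

# L2 (ridge) shrinks all coefficients; L1 (lasso) drives irrelevant ones to exactly zero.
ridge = Ridge(alpha=1.0).fit(X, y)
lasso = Lasso(alpha=0.1).fit(X, y)
print(np.round(ridge.coef_, 2))
print(np.round(lasso.coef_, 2))   # most entries should be exactly 0.0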
26. Data --- Train, Test and Validation
Train --- 90%, Test --- 5%, Validation --- 5%; the
percentages can vary depending on the total size of
the dataset.
• Train --- data used to train the model
• Validation --- data that the model has not seen
but is used for parameter tuning, i.e. the model is
optimized based on performance on this set.
• Test --- model has not seen this data, this data is
not used in any part of the computation. Final
performance metrics are reported on this data.
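One way to produce such a 90/5/5 split with scikit-learn, using placeholder data; the proportions and random seed are arbitrary.

import numpy as np
from sklearn.model_selection import train_test_split

X = np.random.rand(1000, 5)              # placeholder features
y = np.random.randint(0, 2, size=1000)   # placeholder labels

# Two chained splits: 90% train, then the remaining 10% split evenly into validation and test.
X_train, X_hold, y_train, y_hold = train_test_split(X, y, test_size=0.10, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(X_hold, y_hold, test_size=0.50, random_state=42)
print(len(X_train), len(X_val), len(X_test))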
27. Bias – Variance Tradeoff
BIAS --- When we model in a very simple, naive way, for example
using just a single linear equation to predict an actually complex relationship,
the model underfits and misses important insights and
relations between the variables.
VARIANCE --- On the other hand, when we become overly concerned with
fitting the given data and build a very complicated model, the
result is overfitting: every noise point and outlier is treated as a
valid data point and modelled accordingly.
28. Decision Tree Classifications
Advantage ---
• Results are interpretable
• Works for both numerical and categorical data
• Does not require feature transformations (example
--- normalization, scaling)
• Robust to Multicollinearity – correlated features.
Single decision trees are rarely used in practice
Disadvantage ---
• Unstable --- small changes in data can lead to large
structural changes in the decision tree
• Prone to overfitting
• Easily becomes complex
30. Ensemble Techniques --- Random Forest
Many trees are better than one!
A random forest trains N slightly different decision trees
and merges their predictions to get more accurate and
stable results.
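A minimal random forest sketch on synthetic data; the hyperparameters are illustrative, not recommendations.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# 100 slightly different trees, each trained on a bootstrap sample and random feature subsets.
forest = RandomForestClassifier(n_estimators=100, max_depth=6, random_state=0)
forest.fit(X, y)
print(forest.predict_proba(X[:3]))   # averaged votes of the individual trees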
31. Regularization
• Limit tree depth.
• Pruning
• Penalize selection of new
features over features that
have similar gain
• Set stricter stopping criterion
on when to split a node
further (e.g. min gain, number
of samples etc.)
Feature Importance
A feature’s importance score measures
the contribution of that feature. It is
based on the reduction in impurity of the
class due to the feature.
There is a bias towards features with more
categories.
It is important to compute the correlation
with accuracy.
33. Clustering
• Hierarchical clustering
• K-means clustering
• K-NN (k nearest neighbors)
Clustering is a technique that
finds groups (clusters) in the
data that have similar patterns.
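A minimal k-means sketch on placeholder feature vectors; the number of clusters is an assumption.

import numpy as np
from sklearn.cluster import KMeans

X = np.random.rand(200, 4)                         # placeholder feature vectors
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(kmeans.labels_[:10])                         # cluster assignment per point
print(kmeans.cluster_centers_.shape)               # (3, 4) centroids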
35. Similarity based
Recommendations
• Text data – news article,
study material , books, …
• For every piece of content,
compute the distance to
every other piece of content in
the cluster --- save the top-n
in a database
• When a user views any
content, surface the top-n
(typically 5) most similar
items
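A rough sketch of precomputing top-n similar items with TF-IDF and cosine similarity; the three placeholder documents and the top_n value are made up.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["intro to probability", "bayes theorem basics", "cooking with spices"]  # placeholder content
tfidf = TfidfVectorizer().fit_transform(docs)
sim = cosine_similarity(tfidf)                     # pairwise similarity matrix

top_n = 2
for i, row in enumerate(sim):
    neighbours = row.argsort()[::-1][1:top_n + 1]  # skip the item itself (highest similarity)
    # In production these neighbour lists would be saved to a database.
    print(docs[i], "->", [docs[j] for j in neighbours])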
36. Applications
• Recommendations based on Text
Similarity
• Customer Segmentation
• Content Categorization
• As a pre-analysis for supervised
learning
38. Regression
R² Error: This metric compares our current model
with a constant baseline and tells us how much better our
model is. The constant baseline is chosen by taking the mean of
the data and drawing a line at the mean.
Adjusted R²: Adjusted R² depicts the same meaning as R² but
is an improvement on it. R² suffers from the problem that its
score improves as terms are added even when the model is
not improving, which can mislead the researcher. Adjusted
R² is always lower than R² because it adjusts for the increasing
number of predictors and only shows an improvement when there
is a real improvement.
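A small sketch computing R² and adjusted R²; y_true, y_pred and the predictor count p are assumed toy values.

import numpy as np
from sklearn.metrics import r2_score

y_true = np.array([3.0, 5.0, 7.0, 9.0])
y_pred = np.array([2.8, 5.3, 6.9, 9.4])

n, p = len(y_true), 2                       # p = number of predictors (assumed)
r2 = r2_score(y_true, y_pred)
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)
print(r2, adj_r2)                           # adjusted R² penalizes extra predictors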
39. Classification
True Positives --- number of observations where the model correctly
predicts the positive class.
False Positives --- number of observations where the model
incorrectly predicts the positive class.
False Negatives --- number of observations where the model
incorrectly predicts the negative class.
True Negatives --- number of observations where the model
correctly predicts the negative class.
https://en.wikipedia.org/wiki/Precision_and_recall
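A minimal sketch of the confusion-matrix counts, precision and recall on made-up labels.

from sklearn.metrics import confusion_matrix, precision_score, recall_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# confusion_matrix returns [[TN, FP], [FN, TP]] for binary 0/1 labels.
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(tp, fp, fn, tn)
print(precision_score(y_true, y_pred))   # TP / (TP + FP)
print(recall_score(y_true, y_pred))      # TP / (TP + FN)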
41. Thresholding --- Coverage
In a binary classification if you choose randomly the probability of belonging to a class is 0.5
It is possible to improve the percentage of
correct results at the cost of coverage, for example by only
returning a prediction when the score is below 0.3 or above 0.7.
42. Confusion Matrix
Good for checking where your model is incorrect.
For multi-class classification it reflects which classes are correlated.
44. Mean Reciprocal Rank
For implicit dataset, the relevance score is either 0 or 1, for items not bought or bought (not clicked or clicked, etc.).
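A small sketch of mean reciprocal rank over per-user 0/1 relevance lists, ordered by the model's ranking; the lists are illustrative.

def mean_reciprocal_rank(ranked_relevance):
    """ranked_relevance: one 0/1 list per user, in the order the items were recommended."""
    total = 0.0
    for rels in ranked_relevance:
        rr = 0.0
        for rank, rel in enumerate(rels, start=1):
            if rel:                  # first bought/clicked item
                rr = 1.0 / rank
                break
        total += rr
    return total / len(ranked_relevance)

print(mean_reciprocal_rank([[0, 1, 0], [1, 0, 0], [0, 0, 0]]))  # (1/2 + 1 + 0) / 3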
45. Mean Average Precision
Precision@k is the fraction of relevant items in the top k recommendations, and recall@k is the
coverage of relevant items in the top k.
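A minimal precision@k / recall@k sketch; the recommended list and relevant set are placeholders.

def precision_at_k(recommended, relevant, k):
    top_k = recommended[:k]
    return len(set(top_k) & set(relevant)) / k

def recall_at_k(recommended, relevant, k):
    top_k = recommended[:k]
    return len(set(top_k) & set(relevant)) / len(relevant)

recommended = ["b1", "b2", "b3", "b4", "b5"]   # model's ranked list
relevant = {"b2", "b5", "b9"}                  # items the user actually engaged with
print(precision_at_k(recommended, relevant, 3), recall_at_k(recommended, relevant, 3))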
46. Normalized Cumulative Discounted Gain
(NDCG)
Gain
Gain for an item is essentially the
same as the relevance score
Cumulative Gain
Cumulative Gain is defined as the sum of
gains up to a position k
This measure is independent of ordering.
Discounted
Cumulative Gain
By dividing the gain by its rank,
we push the algorithm
to place highly relevant items at
the top to achieve the best DCG
score.
The DCG score grows with the
length of the recommendation list.
Ideal Discounted
Cumulative Gain
Normalizing by the ideal DCG incorporates lists of
different lengths.
The normalized score lies between 0 and 1.
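A small DCG/NDCG sketch assuming graded relevance scores; the example list is illustrative.

import numpy as np

def dcg(relevances, k):
    rel = np.asarray(relevances, dtype=float)[:k]
    ranks = np.arange(1, len(rel) + 1)
    return float(np.sum(rel / np.log2(ranks + 1)))   # gain discounted by rank

def ndcg(relevances, k):
    ideal = sorted(relevances, reverse=True)          # best possible ordering
    ideal_dcg = dcg(ideal, k)
    return dcg(relevances, k) / ideal_dcg if ideal_dcg > 0 else 0.0

print(ndcg([3, 2, 0, 1], k=4))   # 1.0 only if the items were already ideally ordered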
48. Imputation of missing values
https://towardsdatascience.com/feature-engineering-for-machine-learning-3a5e293a5114#1c08
Drop rows with missing values
Cons: Reduces training data. When there are multiple features, values may be missing in only a subset of them.
Replace with a reasonable value – median is common
Cons: There are assumptions used when values are missing which may not be correct.
Categorical Imputation --- replace with the most common value.
Cons: Assumptions
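A minimal pandas sketch of these options; the column names and values are placeholders.

import numpy as np
import pandas as pd

df = pd.DataFrame({"age": [25, np.nan, 40, 31],
                   "genre": ["scifi", None, "scifi", "history"]})  # placeholder data

df["age"] = df["age"].fillna(df["age"].median())          # numerical: replace with median
df["genre"] = df["genre"].fillna(df["genre"].mode()[0])   # categorical: most common value
# Alternatively, df.dropna() simply drops rows with missing values.
print(df)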
51. Log Transform
• It helps to handle skewed data; after the transformation, the
distribution becomes closer to normal.
• In most cases the order of magnitude of the data changes
within the range of the data. For instance, the difference
between ages 15 and 20 is not the same as the difference between ages 65 and 70. In
terms of years they are identical, but in all other
respects a 5-year difference at young ages means a higher-magnitude
difference. This type of data comes from a
multiplicative process, and the log transform normalizes
such magnitude differences.
• It also decreases the effect of outliers, due to the
normalization of magnitude differences, and the model becomes more robust.
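A small sketch of the log transform (log1p, which also handles zeros) on assumed skewed counts.

import numpy as np
import pandas as pd

views = pd.Series([1, 3, 10, 250, 10000])   # skewed, multiplicative-style data
log_views = np.log1p(views)                 # log(1 + x) compresses the large magnitudes
print(log_views.round(2).tolist())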
53. Scaling -- all features have the same range
The continuous features become identical in terms of range after
a scaling process. This process is not mandatory for many algorithms,
but it can still be useful to apply. However, algorithms based
on distance calculations, such as k-NN or k-Means, need scaled
continuous features as model input.
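A minimal sketch contrasting normalization (min-max) and standardization with scikit-learn on a toy matrix.

import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

X = np.array([[1.0, 200.0], [2.0, 300.0], [3.0, 800.0]])

print(MinMaxScaler().fit_transform(X))    # normalization: each feature mapped into [0, 1]
print(StandardScaler().fit_transform(X))  # standardization: zero mean, unit standard deviation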
57. Use Case --- Predict: will a user buy a book? (Online Book Store)
58. Generating Features
Book Features
• Tags --- genre, subject, …
• Level --- Beginner, Intermediate, Advanced
• Popularity score ---
1. exponential decay on clicks
2. time-bound scores, such as number of views in the last 7 days, last 14 days
• Price
• Length of the book
• …
User Features
• Tags --- genre, subject, …
• Level --- Beginner, Intermediate, Advanced
Derived from user interactions on the site:
• View score ---
1. exponential decay on clicks on books
2. time-bound --- number of views in past 7/14 days
• Price category
• Time of day --- categorical
• …
Book-User Features
• Number of views in past 14/30 days
• Already bought
• Number of views from the same author
• …
59. Response Variable & Modeling
• If the site is not super active you might not have enough data on purchases
• Multi-stage model
• Stage 1 : response variable views – i.e will the user view/click on this
book in the next 3 days?
• Stage 2: response variable purchase – will the user purchase this book
in the next 3 days. The probability of view will be a feature in the Stage
2 model.
Modeling: We have a mix of categorical and numerical features --- Random Forest
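A rough two-stage sketch under the assumptions above, using synthetic stand-in features and labels; in practice the Stage 2 model would be trained on held-out Stage 1 predictions rather than in-sample ones.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Placeholder feature matrix for (user, book) pairs plus the two response variables.
rng = np.random.default_rng(0)
X = rng.random((1000, 8))                           # book, user and book-user features
viewed = rng.integers(0, 2, 1000)                   # did the user view the book?
purchased = viewed & rng.integers(0, 2, 1000)       # purchases only happen after views

# Stage 1: predict views.
stage1 = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, viewed)
p_view = stage1.predict_proba(X)[:, 1]

# Stage 2: predict purchases, with the predicted view probability as an extra feature.
X_stage2 = np.column_stack([X, p_view])
stage2 = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_stage2, purchased)
print(stage2.predict_proba(X_stage2[:3])[:, 1])     # purchase probabilities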
60. How to start projects in machine learning?
• Kaggle competitions ---
• Make sure to solve the ML problems for concept development
before competing
61. How to start projects in machine learning?
• Self-guided workshops/projects ---
let's say you have data from Zomato
• Restaurant recommendation --
user based, content similarity
based.
• Restaurant tags from reviews.
• Sentiment analysis from reviews.
64. K-Nearest Neighbors
K-Nearest Neighbor is a classification
algorithm that leverages observations
close to the target point to decide which
class it belongs to
There are two parts of the algorithm: first, how to measure
“close”; second, how many close observations (K) we
need.
https://github.com/spotify/annoy
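A minimal k-NN sketch with scikit-learn (the Annoy link above is a library for approximate nearest neighbours at larger scale); the dataset, K, and distance metric are placeholders.

from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=300, n_features=5, random_state=0)

# K = 5 neighbours, "close" measured with Euclidean distance.
knn = KNeighborsClassifier(n_neighbors=5, metric="euclidean")
knn.fit(X, y)
print(knn.predict(X[:3]))   # class decided by majority vote of the 5 nearest points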
This is how a child learns to navigate the world: they will touch a hot surface, get hurt, and not do that again.
Built in the field of statistics, widely used in machine learning.
The third point means the errors are random.
Normality, independence, and constant variance of errors.
Because the number of data points for both branches (FALSE and TRUE) is less than or equal to 3, we stop further branching and assign the average of each branch to the related leaf node.
"rainy" branch has an CV (22%) which is more than the threshold (10%). This branch needs further splitting. We select "Windy" as the best best node because it has the largest SDR.
When the number of instances at a leaf node is more than one, we calculate the average as the final value for the target.
In order to fit the data (even noisy data), the tree keeps generating new nodes and ultimately becomes too complex to interpret. In this way it loses its generalization capability: it performs very well on the training data but starts making a lot of mistakes on unseen data.
For example, in a regression model in which cigarette smoking is the independent variable of primary interest and the dependent variable is lifespan measured in years, researchers might include education and income as additional independent variables, to ensure that any observed effect of smoking on lifespan is not due to those other socio-economic factors.
If y=1 and your model predicts 0, you are penalized heavily.
Conversely, if y=0 and your model predicts 1, the penalization is infinite.
L2 has one very important advantage over L1: invariance to rotation and scale.
The reason for using the L1 norm to find a sparse solution is its special shape: it has spikes that happen to be at sparse points. Using it to touch the solution surface will very likely find a touch point on a spike tip and thus a sparse solution.
L1 regularization can address the multicollinearity problem by constraining the coefficient norm and pinning some coefficient values to 0.
The L1-norm has the property of producing many coefficients with zero or very small values and a few large coefficients. Computational efficiency: the L1-norm does not have an analytical solution, but the L2-norm does, which allows L2-norm solutions to be calculated efficiently.
Simple decision tree --- split on the attribute with the highest information gain, the lowest Gini impurity, or the highest Chi-Square value.
Gain: By swapping the relative order of any two items, the CG would be unaffected. This is problematic when ranking order is important. For example, in Google Search results you would obviously not want to place the most relevant web page at the bottom.
Having features on a similar scale can help the gradient descent converge more quickly towards the minima. Scaling prevents higher weight being given to features with higher magnitudes.
This transformation does not change the distribution of the feature, and due to the decreased standard deviations, the effects of the outliers increase. Therefore, it is recommended to handle the outliers before normalization.
Standardization is another scaling technique where the values are centered around the mean with a unit standard deviation. This means that the mean of the attribute becomes zero and the resultant distribution has a unit standard deviation.
Standardization can be helpful in cases where the data follows a Gaussian distribution, but this does not necessarily have to be true. Standardization does not have a bounding range, so even if you have outliers in your data, they will not be affected by standardization.
https://www.analyticsvidhya.com/blog/2020/04/feature-scaling-machine-learning-normalization-standardization/