Credibility: Evaluating What's Been Learned
Training and Testing
- We measure the success of a classification procedure by its error rate (or, equivalently, its success rate).
- Measuring the success rate on the training set is highly optimistic.
- The error rate on the training set is called the resubstitution error.
- We use a separate test set for estimating the true error rate.
- The test set must be independent of the training set.
- Sometimes, to tune our classification technique, we also use a third set: the validation set.
- Holding out part of the data for testing (so that it is not used for training) is called the holdout procedure. A minimal split is sketched below.
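As an illustration, here is a minimal holdout split in Python. The 1/3 test fraction, the NumPy arrays, and the helper name are assumptions made for this sketch, not something the slides prescribe.

```python
import numpy as np

def holdout_split(X, y, test_fraction=1/3, seed=0):
    """Shuffle the data, then hold out a fraction of it for testing."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))            # shuffle so test set is independent of order
    n_test = int(len(y) * test_fraction)
    test, train = idx[:n_test], idx[n_test:]
    return X[train], y[train], X[test], y[test]
```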
Predicting Performance
- Expected success rate = 100 − error rate (when the error rate is also expressed as a percentage).
- What we want is the true success rate.
- Estimating the true success rate: suppose the observed success rate is f = s/n, where s is the number of successes out of n instances in total.
- For large n, f follows a normal distribution.
- We then predict an interval for the true success rate p at whatever confidence level we want.
- For example, if f = 75%, then p lies in [73.2%, 76.7%] with 80% confidence (this is the case n = 1000).
Predicting Performance
- From the properties of statistics we know that the mean of f is p and its variance is p(1 − p)/n.
- To use the standard normal distribution we must transform f so that its mean is 0 and its standard deviation is 1.
- So suppose our confidence is c% and we want to calculate p.
- We use the two-tailed property of the normal distribution.
- The total area covered by the normal distribution is taken as 100%, so the area we leave out in the two tails is 100 − c.
Predicting Performance
Finally, after all the manipulations, the true success rate satisfies:

\[ p = \frac{f + \frac{z^2}{2N} \pm z\sqrt{\frac{f}{N} - \frac{f^2}{N} + \frac{z^2}{4N^2}}}{1 + \frac{z^2}{N}} \]

Here,
- p → true success rate
- f → observed success rate
- N → number of instances
- z → factor derived from a normal distribution table using the 100 − c measure
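As a check, here is a small Python sketch of this interval (the function name is ours; z ≈ 1.28 is the two-tailed factor for 80% confidence). With f = 75% and N = 1000 it reproduces the slide's example interval.

```python
import math

def true_success_interval(f, n, z):
    """Confidence interval for the true success rate p, given observed rate f."""
    centre = f + z * z / (2 * n)
    spread = z * math.sqrt(f / n - f * f / n + z * z / (4 * n * n))
    denom = 1 + z * z / n
    return (centre - spread) / denom, (centre + spread) / denom

print(true_success_interval(0.75, 1000, 1.28))  # ≈ (0.732, 0.767)
```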
Cross Validation
- We use cross validation when the amount of data is small and we still need independent training and test sets from it.
- It is important that each class is represented in its actual proportions in the training and test sets: stratification.
- An important cross validation technique is stratified 10-fold cross validation, where the instance set is divided into 10 folds.
- We run 10 iterations, each time taking a different single fold for testing and the remaining 9 folds for training, and average the error over the 10 iterations.
- Problem: computationally intensive.
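A minimal sketch of stratified 10-fold cross validation, assuming scikit-learn and NumPy arrays (the slides' own context is WEKA; GaussianNB stands in for any learning scheme):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.naive_bayes import GaussianNB

def cv_error(X, y, n_splits=10):
    """Average error rate over stratified k-fold cross validation."""
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    errors = []
    for train_idx, test_idx in skf.split(X, y):   # folds preserve class proportions
        model = GaussianNB().fit(X[train_idx], y[train_idx])
        errors.append(np.mean(model.predict(X[test_idx]) != y[test_idx]))
    return np.mean(errors)
```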
Other Estimates
Leave-one-out: steps
- One instance is left out for testing and all the rest are used for training.
- This is iterated over all the instances and the errors are averaged.
Leave-one-out: advantage
- We train on the largest possible training sets.
Leave-one-out: disadvantages
- Computationally intensive.
- Cannot be stratified, since each test set contains only a single instance.
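The same loop with single-instance test sets gives leave-one-out; a sketch under the same scikit-learn and NumPy assumptions:

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.naive_bayes import GaussianNB

def loo_error(X, y):
    """Leave-one-out error: n models, each tested on the one held-out instance."""
    errors = []
    for train_idx, test_idx in LeaveOneOut().split(X):
        model = GaussianNB().fit(X[train_idx], y[train_idx])
        errors.append(float(model.predict(X[test_idx])[0] != y[test_idx][0]))
    return np.mean(errors)
```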
Other Estimates
0.632 bootstrap
- A dataset of n instances is sampled n times, with replacement, to give another dataset of n instances.
- There will be some repeated instances in the second set; the instances that were never drawn (about 36.8% of them) serve as the test set.
- Here the error is defined as: e = 0.632 × (error on test instances) + 0.368 × (error on training instances).
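A sketch of one 0.632 bootstrap estimate under the same assumptions; in practice the estimate would be averaged over several bootstrap repetitions:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def bootstrap_632_error(X, y, seed=0):
    """One 0.632 bootstrap estimate: train on a resample, test on what was left out."""
    rng = np.random.default_rng(seed)
    n = len(y)
    train_idx = rng.integers(0, n, size=n)             # n draws with replacement
    test_idx = np.setdiff1d(np.arange(n), train_idx)   # never-drawn instances (~36.8%)
    model = GaussianNB().fit(X[train_idx], y[train_idx])
    err_test = np.mean(model.predict(X[test_idx]) != y[test_idx])
    err_train = np.mean(model.predict(X[train_idx]) != y[train_idx])
    return 0.632 * err_test + 0.368 * err_train
```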
Comparing Data Mining Methods
- Until now we were dealing with performance prediction.
- Now we look at methods to compare algorithms, to see which one did better.
- We can't directly use the error rate to decide which algorithm is better, as the error rates might have been calculated on different data sets.
- So to compare algorithms we need a statistical test.
- We use Student's t-test for this. It helps us decide whether the mean errors of two algorithms differ or not at a given confidence level.
Comparing Data Mining Methods
- We use the paired t-test, which is a slight modification of Student's t-test.
- Paired t-test: supposing we have effectively unlimited data, do the following:
  - Draw k data sets from the unlimited data we have.
  - Use cross validation with each technique to get the respective outcomes: x1, x2, …, xk and y1, y2, …, yk.
  - mx = mean of the x values, and similarly my.
  - di = xi − yi.
- The t-statistic is then

  \[ t = \frac{\bar{d}}{\sqrt{\sigma_d^2 / k}} \]

  where \(\bar{d}\) is the mean of the differences di and \(\sigma_d^2\) is their variance.
Comparing Data Mining Methods
- The value of k determines the degrees of freedom (k − 1), which lets us look up the critical value z for a particular confidence level.
- If t <= −z or t >= z, the two means differ significantly.
- The hypothesis that the means do not differ is the null hypothesis; if t lies between −z and z we cannot reject it (in the extreme case t = 0 they certainly don't differ). A sketch of the test follows.
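A minimal sketch of the paired t-test, assuming x and y hold the k per-dataset error rates of the two schemes (the values are illustrative); scipy's ttest_rel is the equivalent library call:

```python
import numpy as np
from scipy import stats

def paired_t_statistic(x, y):
    """t-statistic for paired per-dataset errors of two learning schemes."""
    d = np.asarray(x) - np.asarray(y)                  # di = xi - yi
    return d.mean() / np.sqrt(d.var(ddof=1) / len(d))  # k - 1 degrees of freedom

x = [0.12, 0.15, 0.11, 0.14, 0.13]
y = [0.14, 0.16, 0.13, 0.15, 0.16]
print(paired_t_statistic(x, y))   # ≈ -4.81
print(stats.ttest_rel(x, y))      # library equivalent, with two-sided p-value
```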
Predicting Probabilities
- Until now we were considering schemes which, when applied, result in either a correct or an incorrect prediction; scoring them this way is called the 0-1 loss function.
- Now we deal with measuring the success of algorithms that output a probability distribution, e.g. Naïve Bayes.
Predicting Probabilities
Quadratic loss function:
- For a single instance there are k outcomes, or classes.
- Predicted probability vector: p1, p2, …, pk.
- The actual outcome vector is a1, a2, …, ak, where the component for the actual outcome is 1 and all the rest are 0.
- We have to minimize the quadratic loss function, given by

  \[ \sum_{j=1}^{k} (p_j - a_j)^2 \]

- The minimum is achieved when the probability vector is the true probability vector.
Predicting Probabilities
Informational loss function:
- Given by −log2(pi), where i is the class that actually occurred.
- The minimum is again reached at the true probabilities.
Differences between quadratic loss and informational loss:
- Quadratic loss takes all the class probabilities into consideration, while informational loss is based only on the probability assigned to the actual class.
- Quadratic loss is bounded, since its maximum value is 2, while informational loss is unbounded and can grow to infinity.
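A small sketch of both loss functions (NumPy assumed; the probability vector is illustrative):

```python
import numpy as np

def quadratic_loss(p, actual):
    """Sum over classes of (p_j - a_j)^2, where a is a one-hot outcome vector."""
    a = np.zeros_like(p)
    a[actual] = 1.0
    return np.sum((p - a) ** 2)

def informational_loss(p, actual):
    """-log2 of the probability assigned to the class that actually occurred."""
    return -np.log2(p[actual])

p = np.array([0.7, 0.2, 0.1])
print(quadratic_loss(p, actual=0), informational_loss(p, actual=0))
```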
Counting the Cost
- Different outcomes may have different costs.
- For example, in a loan decision the cost of lending to a defaulter is far greater than the lost-business cost of refusing a loan to a non-defaulter.
- Suppose we have a two-class prediction. The outcomes can be: true positive, false negative, false positive, and true negative.
Counting the Cost
- True positive rate: TP / (TP + FN).
- False positive rate: FP / (FP + TN).
- Overall success rate: number of correct classifications / total number of classifications.
- Error rate = 1 − success rate.
- In the multiclass case we have a confusion matrix (the slide shows an actual one and a random one).
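These rates in code form (a trivial helper; the counts come from the two-class confusion matrix):

```python
def rates(tp, fn, fp, tn):
    """Basic rates from the two-class confusion matrix counts."""
    tpr = tp / (tp + fn)                       # true positive rate
    fpr = fp / (fp + tn)                       # false positive rate
    success = (tp + tn) / (tp + fn + fp + tn)  # overall success rate
    return tpr, fpr, 1 - success               # error rate = 1 - success rate
```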
Counting the Cost
- These are the actual and the random outcomes of a three-class problem.
- The diagonal of the confusion matrix represents the successfully classified cases.
- Kappa statistic = (D-observed − D-random) / (D-perfect − D-random), where D counts the instances on the diagonal.
- Here the kappa statistic = (140 − 82) / (200 − 82) = 49.2%.
- Kappa is used to measure the agreement between the predicted and observed categorizations of a dataset, while correcting for agreement that occurs by chance.
- It does not take cost into account.
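A sketch of the kappa computation with NumPy; the two matrices below are a three-class example consistent with the slide's numbers (observed diagonal 140, chance diagonal 82, 200 instances in total):

```python
import numpy as np

def kappa(observed, random):
    """Kappa from the classifier's confusion matrix and a chance predictor's."""
    d_observed = np.trace(observed)  # successes on the diagonal
    d_random = np.trace(random)      # successes expected by chance
    d_perfect = observed.sum()       # every instance classified correctly
    return (d_observed - d_random) / (d_perfect - d_random)

observed = np.array([[88, 10, 2], [14, 40, 6], [18, 10, 12]])
random = np.array([[60, 30, 10], [36, 18, 6], [24, 12, 4]])
print(kappa(observed, random))  # (140 - 82) / (200 - 82) ≈ 0.492
```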
Classification with Costs
- Example cost matrices: with the default 0/1 matrix the total cost just counts the number of errors.
- Performance is then measured by the average cost per prediction, and we try to minimize this cost.
- Expected cost of a prediction: the dot product of the vector of class probabilities and the appropriate column of the cost matrix.
Classification with Costs
Steps to take cost into consideration while testing (see the sketch below):
- First use a learning method to get the probability vector for an instance (e.g. Naïve Bayes).
- Now multiply the probability vector by each column of the cost matrix, one by one, to get the expected cost for each class/column.
- Select the class with the minimum cost (or the maximum, if the matrix encodes benefits rather than costs).
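A sketch of minimum-expected-cost prediction; the cost matrix and probabilities are illustrative, with rows as true classes and columns as predicted classes:

```python
import numpy as np

def min_cost_class(probs, cost):
    """Pick the class whose prediction has the lowest expected cost.

    cost[i, j] = cost of predicting class j when the true class is i, so
    column j dotted with the class probabilities is the expected cost of j.
    """
    expected = probs @ cost          # one expected cost per column/class
    return int(np.argmin(expected))

probs = np.array([0.2, 0.8])         # P(class 0), P(class 1)
cost = np.array([[0, 1],
                 [10, 0]])           # missing class 1 is ten times worse
print(min_cost_class(probs, cost))   # -> 1: expected costs are [8.0, 0.2]
```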
Cost-sensitive Learning
- Until now we included the cost factor only during evaluation.
- We can also incorporate costs into the learning phase of a method.
- We can change the proportion of instances in the training set so as to take care of the costs.
- For example, we can replicate the instances of a particular class so that our learning method gives us a model that makes fewer errors on that class.
Lift Charts
- In practice, costs are rarely known.
- In marketing terminology the increase in response rate is referred to as the lift factor.
- We compare probable scenarios to make decisions; a lift chart allows visual comparison.
- Example: a promotional mailout to 1,000,000 households. Mailing to all gives a 0.1% response (1,000 responses). Some data mining tool identifies a subset of 100,000 households of which 0.4% respond (400 responses): a lift factor of 4.
Lift Charts
Steps to calculate the lift factor (sketched below):
- We decide on a sample size.
- We arrange our data in decreasing order of the predicted probability of a class (the one on which we base the lift factor: the positive class).
- We calculate: sample success proportion = number of positive instances in the sample / sample size.
- Lift factor = sample success proportion / data success proportion.
- We calculate the lift factor for different sample sizes to get a lift chart.
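A sketch of the lift factor computation, assuming NumPy arrays of predicted probabilities and 0/1 labels:

```python
import numpy as np

def lift_factor(scores, labels, sample_size):
    """Lift for the top `sample_size` instances ranked by predicted probability."""
    order = np.argsort(scores)[::-1]   # decreasing predicted probability
    top = labels[order][:sample_size]
    sample_rate = top.mean()           # success proportion in the sample
    data_rate = labels.mean()          # success proportion in the whole data
    return sample_rate / data_rate
```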
Lift Charts
(Figure: a hypothetical lift chart.)
Lift Charts
- In the lift chart we would like to stay toward the upper left corner.
- The diagonal line is the curve for random samples, i.e. without using the sorted data.
- Any good selection will keep the lift curve above the diagonal.
ROC Curves
- ROC stands for receiver operating characteristic.
- Difference from lift charts: the Y axis shows the percentage of true positives, while the X axis shows the percentage of false positives in the samples.
- A ROC curve from a single test set is jagged; it can be smoothed out by cross validation.
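A minimal ROC sketch, assuming scikit-learn; roc_curve returns one (false positive rate, true positive rate) point per threshold on the sorted scores, and the arrays here are illustrative:

```python
import numpy as np
from sklearn.metrics import roc_curve

labels = np.array([1, 1, 0, 1, 0, 0, 1, 0])               # 1 = yes, 0 = no
scores = np.array([0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2])  # P(yes)
fpr, tpr, thresholds = roc_curve(labels, scores)
# Plotting fpr against tpr gives the jagged single-test-set ROC curve.
```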
ROC Curves
(Figure: a ROC curve.)
ROC Curves
Ways to generate ROC curves under cross validation (consider the previous figure for reference).
First way:
- Get the probability estimates over the different folds of the data.
- Sort the data in decreasing order of the probability of the yes class.
- Select a point on the X axis and, for that number of no instances, get the number of yes instances from each fold's ranking.
- Average the number of yes instances over all the folds and plot it.
ROC Curves
Second way:
- Get the probability estimates over the different folds of the data.
- Sort the data in decreasing order of the probability of the yes class.
- Plot a ROC curve for each fold individually.
- Average all the ROC curves.
ROC Curves
(Figure: ROC curves for two schemes.)
ROC Curves
In the previous ROC curves:
- For a small, focused sample, use method A.
- For a large one, use method B.
- In between, choose between A and B with appropriate probabilities.
Recall-Precision Curves
In the case of a search query:
- Recall = number of documents retrieved that are relevant / total number of documents that are relevant.
- Precision = number of documents retrieved that are relevant / total number of documents that are retrieved.
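The two measures in code, using document-id sets (the ids are illustrative):

```python
def recall_precision(retrieved, relevant):
    """Recall and precision for a search query, from sets of document ids."""
    hits = len(retrieved & relevant)              # relevant documents retrieved
    return hits / len(relevant), hits / len(retrieved)

print(recall_precision({1, 2, 3, 4}, {2, 4, 6}))  # recall 2/3, precision 2/4
```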
A Summary
(Table: the different measures used to evaluate the false positive versus false negative tradeoff.)
Cost Curves
- Cost curves plot expected costs directly.
- (Figure: example for the case with uniform costs, i.e. plotting error.)
Cost Curves
(Figure: example with non-uniform costs.)
Cost Curves
- C[+|−] is the cost of predicting + when the instance is −.
- C[−|+] is the cost of predicting − when the instance is +.
Minimum Description Length Principle
- The description length is defined as: the space required to describe a theory + the space required to describe the theory's mistakes.
- Here the theory is the classifier and the mistakes are its errors on the training data.
- We try to minimize the description length.
- The MDL theory is the one that compresses the data the most, i.e. to compress a data set we generate a model and then store the model and its mistakes.
- We need to compute: the size of the model, and the space needed to encode the errors.
Minimum Description Length Principle
- The second part is easy: just use the informational loss function.
- For the first part we need a method to encode the model.
- L[T] = "length" of the theory.
- L[E|T] = length of the training set encoded with respect to the theory.
- The quantity to minimize is L[T] + L[E|T], as sketched below.
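A sketch of the total description length under stated assumptions: model_bits is supplied by some encoding scheme for the theory, and probs holds the model's predicted class probabilities for each training instance, so the informational loss encodes the errors:

```python
import numpy as np

def description_length(model_bits, probs, actual):
    """L[T] + L[E|T]: model size plus informational loss of the training data."""
    n = len(actual)
    # bits to encode each instance's actual class given the model's probabilities
    data_bits = -np.sum(np.log2(probs[np.arange(n), actual]))
    return model_bits + data_bits
```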
Minimum Description Length Principle
MDL and clustering:
- Description length of the theory: the bits needed to encode the clusters, e.g. the cluster centers.
- Description length of the data given the theory: encode each instance's cluster membership and its position relative to the cluster, e.g. the distance to its cluster center.
- This works if the coding scheme uses less code space for small numbers than for large ones.
