Microsoft Decision Trees Algorithm
Overview: Decision Trees Algorithm, DMX Queries, Data Mining Using Decision Trees, Model Content for a Decision Trees Model, Decision Tree Parameters, Decision Tree Stored Procedures
Decision Trees Algorithm The Microsoft Decision Trees algorithm is a classification and regression algorithm provided by Microsoft SQL Server Analysis Services for predictive modeling of both discrete and continuous attributes. For discrete attributes, the algorithm makes predictions based on the relationships between input columns in a dataset. It uses the values, known as states, of those columns to predict the states of a column that you designate as predictable. For example, in a scenario to predict which customers are likely to purchase a motorbike, if nine out of ten younger customers buy one but only two out of ten older customers do so, the algorithm infers that age is a good predictor of motorbike purchase.
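The intuition that age splits the buyers cleanly can be quantified with an information-gain calculation. The sketch below is a simplified illustration built from the slide's 9-of-10 and 2-of-10 figures, not the exact scoring formula SSAS uses:

```python
import math

def entropy(yes, n):
    # Shannon entropy of a binary outcome given `yes` positives out of n cases
    probs = [yes / n, (n - yes) / n]
    return -sum(p * math.log2(p) for p in probs if p > 0)

# From the slide: 9 of 10 younger customers buy, 2 of 10 older customers buy
younger_yes, older_yes, group_size = 9, 2, 10
total_yes = younger_yes + older_yes           # 11 buyers out of 20 customers
parent = entropy(total_yes, 2 * group_size)   # uncertainty before splitting

# Weighted entropy after splitting on age; both groups are the same size
children = 0.5 * entropy(younger_yes, group_size) + 0.5 * entropy(older_yes, group_size)
gain = parent - children
print(round(gain, 3))  # ~0.397: a large reduction, so age is an informative split
```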
Decision Trees Algorithm For continuous attributes, the algorithm uses linear regression to determine where a decision tree splits. If more than one column is set to predictable, or if the input data contains a nested table that is set to predictable, the algorithm builds a separate decision tree for each predictable column.
DMX Queries Let's see how to use DMX queries by creating a simple tree model based on the School Plans data set. The School Plans table contains data about 500,000 high school students, including Parent Support, Parent Income, Sex, IQ, and whether or not the student plans to attend school. Using the Decision Trees algorithm, you can create a mining model that predicts the School Plans attribute from the four other attributes.
DMX Queries (Classification)

Model creation:

CREATE MINING STRUCTURE SchoolPlans (
    ID LONG KEY,
    Sex TEXT DISCRETE,
    ParentIncome LONG CONTINUOUS,
    IQ LONG CONTINUOUS,
    ParentSupport TEXT DISCRETE,
    SchoolPlans TEXT DISCRETE )
WITH HOLDOUT (10 PERCENT)

ALTER MINING STRUCTURE SchoolPlans
ADD MINING MODEL SchoolPlan (
    ID,
    Sex,
    ParentIncome,
    IQ,
    ParentSupport,
    SchoolPlans PREDICT )
USING Microsoft_Decision_Trees
DMX Queries (Classification)

Training the SchoolPlan model:

INSERT INTO SchoolPlans
    (ID, Sex, IQ, ParentSupport, ParentIncome, SchoolPlans)
OPENQUERY(SchoolPlans,
    'SELECT ID, Sex, IQ, ParentSupport, ParentIncome, SchoolPlans
     FROM SchoolPlans')
DMX Queries (Classification)

Predicting SchoolPlans for new students. This query returns ID, SchoolPlans, and Probability:

SELECT t.ID, SchoolPlans.SchoolPlans,
    PredictProbability(SchoolPlans) AS [Probability]
FROM SchoolPlans
PREDICTION JOIN
    OPENQUERY(SchoolPlans,
        'SELECT ID, Sex, IQ, ParentSupport, ParentIncome
         FROM NewStudents') AS t
ON  SchoolPlans.ParentIncome = t.ParentIncome AND
    SchoolPlans.IQ = t.IQ AND
    SchoolPlans.Sex = t.Sex AND
    SchoolPlans.ParentSupport = t.ParentSupport
DMX Queries (Classification)

This query returns the histogram of the SchoolPlans predictions in the form of a nested table; its result is shown on the next slide:

SELECT t.ID,
    PredictHistogram(SchoolPlans) AS [SchoolPlans]
FROM SchoolPlans
PREDICTION JOIN
    OPENQUERY(SchoolPlans,
        'SELECT ID, Sex, IQ, ParentSupport, ParentIncome
         FROM NewStudents') AS t
ON  SchoolPlans.ParentIncome = t.ParentIncome AND
    SchoolPlans.IQ = t.IQ AND
    SchoolPlans.Sex = t.Sex AND
    SchoolPlans.ParentSupport = t.ParentSupport
DMX Queries (Classification) [result table of the PredictHistogram query]
DMX Queries (Regression)

Regression predicts continuous variables using linear regression formulas based on regressors that you specify. The following creates and trains a regression model that predicts ParentIncome from IQ, Sex, ParentSupport, and SchoolPlans; IQ is used as a regressor:

ALTER MINING STRUCTURE SchoolPlans
ADD MINING MODEL ParentIncome (
    ID,
    Sex,
    ParentIncome PREDICT,
    IQ REGRESSOR,
    ParentSupport,
    SchoolPlans )
USING Microsoft_Decision_Trees

INSERT INTO ParentIncome
DMX Queries (Regression)

Continuous prediction: using the decision tree to predict ParentIncome for new students, along with the estimated standard deviation of each prediction:

SELECT t.ID, ParentIncome.ParentIncome,
    PredictStdev(ParentIncome) AS Deviation
FROM ParentIncome
PREDICTION JOIN
    OPENQUERY(SchoolPlans,
        'SELECT ID, Sex, IQ, ParentSupport, SchoolPlans
         FROM NewStudents') AS t
ON  ParentIncome.SchoolPlans = t.SchoolPlans AND
    ParentIncome.IQ = t.IQ AND
    ParentIncome.Sex = t.Sex AND
    ParentIncome.ParentSupport = t.ParentSupport
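Conceptually, each node of a regression tree stores a linear formula over its regressors, as in the ParentIncome-from-IQ model above. A minimal ordinary-least-squares sketch of what one such leaf formula looks like (the case data are hypothetical; this illustrates the idea, not the SSAS internals):

```python
# Illustrative only: a regression-tree leaf stores a linear formula over its
# regressors (here, income predicted from IQ within one node's cases).
def fit_line(xs, ys):
    # Ordinary least squares for y = a + b*x
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Hypothetical cases that reached one leaf: (IQ, ParentIncome)
cases = [(95, 40000), (100, 45000), (110, 52000), (120, 61000)]
a, b = fit_line([c[0] for c in cases], [c[1] for c in cases])
predict = lambda iq: a + b * iq  # the leaf's regression formula
```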
DMX Queries (Association)

CREATE MINING MODEL DanceAssociation (
    ID LONG KEY,
    Gender TEXT DISCRETE,
    MaritalStatus TEXT DISCRETE,
    Shows TABLE PREDICT (
        Show TEXT KEY ) )
USING Microsoft_Decision_Trees

Each show is considered an attribute with two binary states: existing or missing.
DMX Queries (Association)

Training the associative trees model. Because the model contains a nested table, the training statement uses the SHAPE statement:

INSERT INTO DanceAssociation
    (ID, Gender, MaritalStatus, Shows (SKIP, Show))
SHAPE {
    OPENQUERY (DanceSurvey,
        'SELECT ID, Gender, [Marital Status] FROM Customers ORDER BY ID')
}
APPEND (
    { OPENQUERY (DanceSurvey,
        'SELECT ID, Show FROM Shows ORDER BY ID') }
    RELATE ID TO ID
) AS Shows
DMX Queries (Association)

Suppose there is a married male customer who likes the Michael Jackson show. This query returns the five other shows this customer is most likely to find appealing:

SELECT t.ID,
    Predict(DanceAssociation.Shows, 5, $AdjustedProbability) AS Recommendation
FROM DanceAssociation
NATURAL PREDICTION JOIN (
    SELECT '101' AS ID, 'Male' AS Gender, 'Married' AS MaritalStatus,
        (SELECT 'Michael Jackson' AS Show) AS Shows ) AS t
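The recommendation step amounts to ranking the candidate shows by their predicted (adjusted) probability and keeping the top five. Schematically, with hypothetical per-show scores standing in for the model's output:

```python
# Hypothetical per-show adjusted probabilities for this customer; the
# Predict(Shows, 5, $AdjustedProbability) call conceptually ranks the
# candidate shows and returns the five best.
scores = {"Swan Lake": 0.61, "Riverdance": 0.55, "Stomp": 0.48,
          "Tap Dogs": 0.33, "Lord of the Dance": 0.70, "Burn the Floor": 0.21}
top5 = sorted(scores, key=scores.get, reverse=True)[:5]
print(top5[0])  # the strongest recommendation: "Lord of the Dance"
```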
Data Mining Using Decision Trees The most common data mining task for a decision tree is classification, i.e. determining whether or not a set of data belongs to a specific type, or class. The principal idea of a decision tree is to split your data recursively into subsets. The process of evaluating all inputs is then repeated on each subset. When this recursive process completes, a decision tree has been formed.
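The recursive splitting described above can be sketched in a few dozen lines. This toy version uses information gain and majority-vote leaves on a made-up dataset; it illustrates the principle only, not Microsoft's implementation:

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_split(rows, labels):
    # Pick the (feature, value) test with the largest entropy reduction
    best, base = None, entropy(labels)
    for f in range(len(rows[0])):
        for v in set(r[f] for r in rows):
            left = [l for r, l in zip(rows, labels) if r[f] == v]
            right = [l for r, l in zip(rows, labels) if r[f] != v]
            if not left or not right:
                continue
            gain = base - (len(left) * entropy(left) +
                           len(right) * entropy(right)) / len(rows)
            if best is None or gain > best[0]:
                best = (gain, f, v)
    return best

def build(rows, labels):
    # Stop when the node is pure or no useful split exists; else recurse
    if len(set(labels)) == 1:
        return labels[0]
    split = best_split(rows, labels)
    if split is None or split[0] <= 0:
        return Counter(labels).most_common(1)[0][0]
    _, f, v = split
    yes = [(r, l) for r, l in zip(rows, labels) if r[f] == v]
    no = [(r, l) for r, l in zip(rows, labels) if r[f] != v]
    return (f, v,
            build([r for r, _ in yes], [l for _, l in yes]),
            build([r for r, _ in no], [l for _, l in no]))

def classify(tree, row):
    while isinstance(tree, tuple):
        f, v, yes, no = tree
        tree = yes if row[f] == v else no
    return tree

# Made-up data: (ParentSupport, Sex) -> SchoolPlans
rows = [("High", "M"), ("High", "F"), ("Low", "M"), ("Low", "F")]
labels = ["Yes", "Yes", "No", "No"]
tree = build(rows, labels)
print(classify(tree, ("High", "F")))  # "Yes"
```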
Data Mining Using Decision Trees Decision trees offer several advantages over other data mining algorithms. Trees are quick to build and easy to interpret. Each node in the tree is clearly labeled in terms of the input attributes, and each path from the root to a leaf forms a rule about your target variable. Prediction based on decision trees is efficient.
Model Content for a Decision Trees Model The top level is the model node. The children of the model node are its tree root nodes. If a tree model contains a single tree, there is only one node at the second level. The nodes of the other levels are either intermediate nodes or leaf nodes of the tree. The probabilities of each predictable attribute state are stored in the distribution rowsets.
Interpreting the Mining Model Content A decision trees model has a single parent node that represents the model and its metadata; underneath it are independent trees, one for each predictable attribute you select. For example, if you set up your decision tree model to predict whether customers will purchase something, and provide inputs for gender and income, the model creates a single tree for the purchasing attribute, with many branches that divide on conditions related to gender and income. If you then add a separate predictable attribute for participation in a customer rewards program, the algorithm creates two separate trees under the parent node: one tree contains the analysis for purchasing, and the other contains the analysis for the customer rewards program.
Decision Tree Parameters Tree growth, tree shape, and the input/output attribute settings are controlled by the following parameters. You can fine-tune your model's accuracy by adjusting these parameter settings.
Decision Tree Parameters

COMPLEXITY_PENALTY
When the value of this parameter is set close to 0, there is a lower penalty for tree growth, and you may see large trees. When it is set close to 1, tree growth is penalized heavily, and the resulting trees are relatively small. If there are fewer than 10 input attributes, the default value is 0.5; if there are more than 100, it is 0.99; between 10 and 100 input attributes, it is 0.9.
Decision Tree Parameters

MINIMUM_SUPPORT
This parameter sets the minimum number of cases a leaf node may contain. For example, if it is set to 25, any split that would produce a child node containing fewer than 25 cases is not accepted. The default value for MINIMUM_SUPPORT is 10.

SCORE_METHOD
The three possible values for SCORE_METHOD are:
SCORE_METHOD = 1: use an entropy score for tree growth.
SCORE_METHOD = 2: use the Bayesian with K2 Prior method, which adds a constant for each state of the predictable attribute in a tree node, regardless of the node's level in the tree.
SCORE_METHOD = 3: use the Bayesian Dirichlet Equivalent with Uniform Prior (BDEU) method.
Decision Tree Parameters

SPLIT_METHOD
SPLIT_METHOD = 1 means the tree is split only in a binary way.
SPLIT_METHOD = 2 indicates that the tree should always split completely on each attribute.
SPLIT_METHOD = 3, the default, lets the decision tree automatically choose the better of the previous two methods.

MAXIMUM_INPUT_ATTRIBUTES
When the number of input attributes is greater than this parameter value, feature selection is invoked implicitly to select the most significant input attributes.
Decision Tree Parameters

MAXIMUM_OUTPUT_ATTRIBUTES
When the number of predictable attributes is greater than this parameter value, feature selection is invoked implicitly to select the most significant attributes.

FORCE_REGRESSOR
This parameter is typically used in price elasticity models. For example, suppose that you have a model to predict Sales using Price and other attributes. If you specify FORCE_REGRESSOR = Price, you get regression formulas using Price and other significant attributes for each node of the tree.
Decision Tree Stored Procedures The system stored procedures used by the Decision Tree viewer are:

CALL System.GetTreeScores('MovieAssociation')
CALL System.DTGetNodes('MovieAssociation')

CALL System.DTGetNodeGraph('MovieAssociation', 60)

CALL System.DTAddNodes('MovieAssociation', '36;34',
    '99;282;20;261;26;201;33;269;30;187')
Decision Tree Stored Procedures GetTreeScores is the procedure the Decision Tree viewer uses to populate the drop-down tree selector. It takes the name of a decision tree model as a parameter and returns a table containing one row per tree in the model, with the following three columns:
ATTRIBUTE_NAME is the name of the tree.
NODE_UNIQUE_NAME is the content node representing the root of the tree.
MSOLAP_NODE_SCORE is a number representing the amount of information (number of nodes) in the tree.
Decision Tree Stored Procedures DTGetNodes is used by the decision tree Dependency Network viewer when you click the Add Nodes button. It returns a row for every potential node in the dependency network and has the following two columns:
NODE_UNIQUE_NAME1 is an identifier that is unique for the dependency network.
NODE_CAPTION is the name of the node.

Más contenido relacionado

La actualidad más candente

Decision tree lecture 3
Decision tree lecture 3Decision tree lecture 3
Decision tree lecture 3
Laila Fatehy
 

La actualidad más candente (20)

Decision tree lecture 3
Decision tree lecture 3Decision tree lecture 3
Decision tree lecture 3
 
Chapter 04-discriminant analysis
Chapter 04-discriminant analysisChapter 04-discriminant analysis
Chapter 04-discriminant analysis
 
Cs501 classification prediction
Cs501 classification predictionCs501 classification prediction
Cs501 classification prediction
 
Classification techniques in data mining
Classification techniques in data miningClassification techniques in data mining
Classification techniques in data mining
 
Unsupervised Learning Techniques to Diversifying and Pruning Random Forest
Unsupervised Learning Techniques to Diversifying and Pruning Random ForestUnsupervised Learning Techniques to Diversifying and Pruning Random Forest
Unsupervised Learning Techniques to Diversifying and Pruning Random Forest
 
Decision Tree and Bayesian Classification
Decision Tree and Bayesian ClassificationDecision Tree and Bayesian Classification
Decision Tree and Bayesian Classification
 
2.1 Data Mining-classification Basic concepts
2.1 Data Mining-classification Basic concepts2.1 Data Mining-classification Basic concepts
2.1 Data Mining-classification Basic concepts
 
Fuzzy Querying Based on Relational Database
Fuzzy Querying Based on Relational DatabaseFuzzy Querying Based on Relational Database
Fuzzy Querying Based on Relational Database
 
Random forest
Random forestRandom forest
Random forest
 
Bc0041
Bc0041Bc0041
Bc0041
 
Data1
Data1Data1
Data1
 
Lect 2 getting to know your data
Lect 2 getting to know your dataLect 2 getting to know your data
Lect 2 getting to know your data
 
Chapter01 introductory handbook
Chapter01 introductory handbookChapter01 introductory handbook
Chapter01 introductory handbook
 
Unit 3classification
Unit 3classificationUnit 3classification
Unit 3classification
 
Lect9 Decision tree
Lect9 Decision treeLect9 Decision tree
Lect9 Decision tree
 
2.8 accuracy and ensemble methods
2.8 accuracy and ensemble methods2.8 accuracy and ensemble methods
2.8 accuracy and ensemble methods
 
Classification in data mining
Classification in data mining Classification in data mining
Classification in data mining
 
08 classbasic
08 classbasic08 classbasic
08 classbasic
 
08 classbasic
08 classbasic08 classbasic
08 classbasic
 
08 classbasic
08 classbasic08 classbasic
08 classbasic
 

Similar a MS SQL SERVER: Decision trees algorithm

Machine Learning with WEKA
Machine Learning with WEKAMachine Learning with WEKA
Machine Learning with WEKA
butest
 
Cognitive Database: An Apache Spark-Based AI-Enabled Relational Database Syst...
Cognitive Database: An Apache Spark-Based AI-Enabled Relational Database Syst...Cognitive Database: An Apache Spark-Based AI-Enabled Relational Database Syst...
Cognitive Database: An Apache Spark-Based AI-Enabled Relational Database Syst...
Databricks
 
MS SQL SERVER: Microsoft naive bayes algorithm
MS SQL SERVER: Microsoft naive bayes algorithmMS SQL SERVER: Microsoft naive bayes algorithm
MS SQL SERVER: Microsoft naive bayes algorithm
sqlserver content
 
Random forest algorithm for regression a beginner's guide
Random forest algorithm for regression   a beginner's guideRandom forest algorithm for regression   a beginner's guide
Random forest algorithm for regression a beginner's guide
prateek kumar
 
Analyzing and Visualizing Data Chapter 6Data Represent.docx
Analyzing and Visualizing Data Chapter 6Data Represent.docxAnalyzing and Visualizing Data Chapter 6Data Represent.docx
Analyzing and Visualizing Data Chapter 6Data Represent.docx
durantheseldine
 
Bank loan purchase modeling
Bank loan purchase modelingBank loan purchase modeling
Bank loan purchase modeling
Saleesh Satheeshchandran
 

Similar a MS SQL SERVER: Decision trees algorithm (20)

DM Unit-III ppt.ppt
DM Unit-III ppt.pptDM Unit-III ppt.ppt
DM Unit-III ppt.ppt
 
Data mining
Data miningData mining
Data mining
 
Tree-Based Methods (Article 8 - Practical Exercises)
Tree-Based Methods (Article 8 - Practical Exercises)Tree-Based Methods (Article 8 - Practical Exercises)
Tree-Based Methods (Article 8 - Practical Exercises)
 
Machine Learning with WEKA
Machine Learning with WEKAMachine Learning with WEKA
Machine Learning with WEKA
 
Cognitive Database: An Apache Spark-Based AI-Enabled Relational Database Syst...
Cognitive Database: An Apache Spark-Based AI-Enabled Relational Database Syst...Cognitive Database: An Apache Spark-Based AI-Enabled Relational Database Syst...
Cognitive Database: An Apache Spark-Based AI-Enabled Relational Database Syst...
 
MS SQL SERVER: Microsoft naive bayes algorithm
MS SQL SERVER: Microsoft naive bayes algorithmMS SQL SERVER: Microsoft naive bayes algorithm
MS SQL SERVER: Microsoft naive bayes algorithm
 
R decision tree
R   decision treeR   decision tree
R decision tree
 
Random forest algorithm for regression a beginner's guide
Random forest algorithm for regression   a beginner's guideRandom forest algorithm for regression   a beginner's guide
Random forest algorithm for regression a beginner's guide
 
Analyzing and Visualizing Data Chapter 6Data Represent.docx
Analyzing and Visualizing Data Chapter 6Data Represent.docxAnalyzing and Visualizing Data Chapter 6Data Represent.docx
Analyzing and Visualizing Data Chapter 6Data Represent.docx
 
Mis End Term Exam Theory Concepts
Mis End Term Exam Theory ConceptsMis End Term Exam Theory Concepts
Mis End Term Exam Theory Concepts
 
Weka term paper(siddharth 10 bm60086)
Weka term paper(siddharth 10 bm60086)Weka term paper(siddharth 10 bm60086)
Weka term paper(siddharth 10 bm60086)
 
MS SQL SERVER: Microsoft sequence clustering and association rules
MS SQL SERVER: Microsoft sequence clustering and association rulesMS SQL SERVER: Microsoft sequence clustering and association rules
MS SQL SERVER: Microsoft sequence clustering and association rules
 
MS SQL SERVER: Microsoft sequence clustering and association rules
MS SQL SERVER: Microsoft sequence clustering and association rulesMS SQL SERVER: Microsoft sequence clustering and association rules
MS SQL SERVER: Microsoft sequence clustering and association rules
 
Machine Learning Classifiers
Machine Learning ClassifiersMachine Learning Classifiers
Machine Learning Classifiers
 
Dqs mds-matching 15042015
Dqs mds-matching 15042015Dqs mds-matching 15042015
Dqs mds-matching 15042015
 
Recommendation System
Recommendation SystemRecommendation System
Recommendation System
 
Ml9 introduction to-unsupervised_learning_and_clustering_methods
Ml9 introduction to-unsupervised_learning_and_clustering_methodsMl9 introduction to-unsupervised_learning_and_clustering_methods
Ml9 introduction to-unsupervised_learning_and_clustering_methods
 
Etl Overview (Extract, Transform, And Load)
Etl Overview (Extract, Transform, And Load)Etl Overview (Extract, Transform, And Load)
Etl Overview (Extract, Transform, And Load)
 
Bank loan purchase modeling
Bank loan purchase modelingBank loan purchase modeling
Bank loan purchase modeling
 
Data mining
Data miningData mining
Data mining
 

Más de sqlserver content

Más de sqlserver content (20)

MS SQL SERVER: Using the data mining tools
MS SQL SERVER: Using the data mining toolsMS SQL SERVER: Using the data mining tools
MS SQL SERVER: Using the data mining tools
 
MS SQL SERVER: SSIS and data mining
MS SQL SERVER: SSIS and data miningMS SQL SERVER: SSIS and data mining
MS SQL SERVER: SSIS and data mining
 
MS SQL SERVER: Programming sql server data mining
MS SQL SERVER:  Programming sql server data miningMS SQL SERVER:  Programming sql server data mining
MS SQL SERVER: Programming sql server data mining
 
MS SQL SERVER: Olap cubes and data mining
MS SQL SERVER:  Olap cubes and data miningMS SQL SERVER:  Olap cubes and data mining
MS SQL SERVER: Olap cubes and data mining
 
MS SQL SERVER: Microsoft time series algorithm
MS SQL SERVER: Microsoft time series algorithmMS SQL SERVER: Microsoft time series algorithm
MS SQL SERVER: Microsoft time series algorithm
 
MS SQL SERVER: Neural network and logistic regression
MS SQL SERVER: Neural network and logistic regressionMS SQL SERVER: Neural network and logistic regression
MS SQL SERVER: Neural network and logistic regression
 
MS SQL Server: Data mining concepts and dmx
MS SQL Server: Data mining concepts and dmxMS SQL Server: Data mining concepts and dmx
MS SQL Server: Data mining concepts and dmx
 
MS Sql Server: Reporting models
MS Sql Server: Reporting modelsMS Sql Server: Reporting models
MS Sql Server: Reporting models
 
MS Sql Server: Reporting manipulating data
MS Sql Server: Reporting manipulating dataMS Sql Server: Reporting manipulating data
MS Sql Server: Reporting manipulating data
 
MS Sql Server: Reporting introduction
MS Sql Server: Reporting introductionMS Sql Server: Reporting introduction
MS Sql Server: Reporting introduction
 
MS Sql Server: Reporting basics
MS Sql  Server: Reporting basicsMS Sql  Server: Reporting basics
MS Sql Server: Reporting basics
 
MS Sql Server: Datamining Introduction
MS Sql Server: Datamining IntroductionMS Sql Server: Datamining Introduction
MS Sql Server: Datamining Introduction
 
MS Sql Server: Business Intelligence
MS Sql Server: Business IntelligenceMS Sql Server: Business Intelligence
MS Sql Server: Business Intelligence
 
MS SQLSERVER:Feeding Data Into Database
MS SQLSERVER:Feeding Data Into DatabaseMS SQLSERVER:Feeding Data Into Database
MS SQLSERVER:Feeding Data Into Database
 
MS SQLSERVER:Doing Calculations With Functions
MS SQLSERVER:Doing Calculations With FunctionsMS SQLSERVER:Doing Calculations With Functions
MS SQLSERVER:Doing Calculations With Functions
 
MS SQLSERVER:Deleting A Database
MS SQLSERVER:Deleting A DatabaseMS SQLSERVER:Deleting A Database
MS SQLSERVER:Deleting A Database
 
MS SQLSERVER:Customizing Your D Base Design
MS SQLSERVER:Customizing Your D Base DesignMS SQLSERVER:Customizing Your D Base Design
MS SQLSERVER:Customizing Your D Base Design
 
MS SQLSERVER:Creating Views
MS SQLSERVER:Creating ViewsMS SQLSERVER:Creating Views
MS SQLSERVER:Creating Views
 
MS SQLSERVER:Creating A Database
MS SQLSERVER:Creating A DatabaseMS SQLSERVER:Creating A Database
MS SQLSERVER:Creating A Database
 
MS SQLSERVER:Advanced Query Concepts Copy
MS SQLSERVER:Advanced Query Concepts   CopyMS SQLSERVER:Advanced Query Concepts   Copy
MS SQLSERVER:Advanced Query Concepts Copy
 

Último

Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers:  A Deep Dive into Serverless Spatial Data and FMECloud Frontiers:  A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Safe Software
 
Why Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire businessWhy Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire business
panagenda
 

Último (20)

A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?
 
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
 
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
 
MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024
 
Top 10 Most Downloaded Games on Play Store in 2024
Top 10 Most Downloaded Games on Play Store in 2024Top 10 Most Downloaded Games on Play Store in 2024
Top 10 Most Downloaded Games on Play Store in 2024
 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
 
Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024
 
Scaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationScaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organization
 
Data Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonData Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt Robison
 
Strategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a FresherStrategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a Fresher
 
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers:  A Deep Dive into Serverless Spatial Data and FMECloud Frontiers:  A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
 
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemkeProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
 
AWS Community Day CPH - Three problems of Terraform
AWS Community Day CPH - Three problems of TerraformAWS Community Day CPH - Three problems of Terraform
AWS Community Day CPH - Three problems of Terraform
 
Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024
 
Apidays New York 2024 - The value of a flexible API Management solution for O...
Apidays New York 2024 - The value of a flexible API Management solution for O...Apidays New York 2024 - The value of a flexible API Management solution for O...
Apidays New York 2024 - The value of a flexible API Management solution for O...
 
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot TakeoffStrategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
 
Top 5 Benefits OF Using Muvi Live Paywall For Live Streams
Top 5 Benefits OF Using Muvi Live Paywall For Live StreamsTop 5 Benefits OF Using Muvi Live Paywall For Live Streams
Top 5 Benefits OF Using Muvi Live Paywall For Live Streams
 
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
 
Why Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire businessWhy Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire business
 
Deploy with confidence: VMware Cloud Foundation 5.1 on next gen Dell PowerEdg...
Deploy with confidence: VMware Cloud Foundation 5.1 on next gen Dell PowerEdg...Deploy with confidence: VMware Cloud Foundation 5.1 on next gen Dell PowerEdg...
Deploy with confidence: VMware Cloud Foundation 5.1 on next gen Dell PowerEdg...
 

MS SQL SERVER: Decision trees algorithm

  • 2. Overview Decision Trees Algorithm DMX Queries Data Mining usingDecision Trees Model Content for a Decision Trees Model Decision Tree Parameters Decision Tree Stored Procedures
  • 3. Decision Trees Algorithm The Microsoft Decision Trees algorithm is a classification and regression algorithm provided by Microsoft SQL Server Analysis Services for use in predictive modeling of both discrete and continuous attributes. For discrete attributes, the algorithm makes predictions based on the relationships between input columns in a dataset. It uses the values, known as states, of those columns to predict the states of a column that you designate as predictable. For example, in a scenario to predict which customers are likely to purchase a motor bike, if nine out of ten younger customers buy a motor bike, but only two out of ten older customers do so, the algorithm infers that age is a good predictor of the bike purchase.
  • 4. Decision Trees Algorithm For continuous attributes, the algorithm uses linear regression to determine where a decision tree splits. If more than one column is set to predictable, or if the input data contains a nested table that is set to predictable, the algorithm builds a separate decision tree for each predictable column.
  • 5. DMX Queries Lets understand how to use DMX queries by creating a simple tree model based on the School Plans data set. The table School Plans contains data about 500,000 high school students, including Parent Support, Parent Income, Sex, IQ, and whether or not the student plans to attend School. using the Decision Trees algorithm, you can create a mining model, predicting the School Plans attribute based on the four other attributes.
  • 6. DMX Queries(Classification) CREATE MINING STRUCTURE SchoolPlans (ID LONG KEY, Sex TEXT DISCRETE, ParentIncome LONG CONTINUOUS, IQ LONG CONTINUOUS, ParentSupport TEXT DISCRETE, SchoolPlans TEXT DISCRETE ) WITH HOLDOUT (10 PERCENT) ALTER MINING STRUCTURE SchoolPlans ADD MINING MODEL SchoolPlan ( ID, Sex, ParentIncome, IQ, ParentSupport, SchoolPlans PREDICT ) USING Microsoft Decision Trees Model Creation:
  • 7. DMX Queries(Classification) INSERT INTO SchoolPlans (ID, Sex, IQ, ParentSupport, ParentIncome, SchoolPlans) OPENQUERY(SchoolPlans, ‘SELECT ID, Sex, IQ, ParentSupport, ParentIncome, SchoolPlans FROM SchoolPlans’) Training the SchoolPlan Model
  • 8. DMX Queries(Classification) SELECT t.ID, SchoolPlans.SchoolPlans, PredictProbability(SchoolPlans) AS [Probability] FROM SchoolPlans PREDICTION JOIN OPENQUERY(SchoolPlans, ‘SELECT ID, Sex, IQ, ParentSupport, ParentIncome FROM NewStudents’) AS t ON SchoolPlans.ParentIncome= t.ParentIncome AND SchoolPlans.IQ = t.IQ AND SchoolPlans.Sex= t.Sex AND SchoolPlans.ParentSupport= t.ParentSupport Predicting the SchoolPlan for a new student. This query returns ID, SchoolPlans, and Probability.
  • 9. DMX Queries(Classification) SELECT t.ID, PredictHistogram(SchoolPlans) AS [SchoolPlans] FROM SchoolPlans PREDICTION JOIN OPENQUERY(SchoolPlans, ‘SELECT ID, Sex, IQ, ParentSupport, ParentIncome FROM NewStudents’) AS t ON SchoolPlans.ParentIncome= t.ParentIncome AND SchoolPlans.IQ = t.IQ AND SchoolPlans.Sex= t.Sex AND SchoolPlans.ParentSupport= t.ParentSupportn Query returns the histogram of the SchoolPlans predictions in the form of a nested table. Result of this query is shown in the next slide.
  • 11. DMX Queries (Regression) Regression means predicting continuous variables using linear regression formulas based on regressors that you specify. ALTER MINING STRUCTURE SchoolPlans ADD MINING MODEL ParentIncome ( ID, Gender, ParentIncome PREDICT, IQ REGRESSOR, ParentEncouragement, SchoolPlans ) USING Microsoft Decision Trees INSERT INTO ParentIncome Creating and training a regression model to Predict ParentIncome using IQ, Sex, ParentSupport, and SchoolPlans. IQ is used as a regressor.
  • 12. DMX Queries (Regression) SELECT t.StudentID, ParentIncome.ParentIncome, PredictStdev(ParentIncome) AS Deviation FROM ParentIncome PREDICTION JOIN OPENQUERY(SchoolPlans, ‘SELECT ID, Sex, IQ, ParentSupport, SchoolPlans FROM NewStudents’) AS t ON ParentIncome.SchoolPlans = t. SchoolPlans AND ParentIncome.IQ = t.IQ AND ParentIncome.Sex = t.Sex AND ParentIncome.ParentSupport = t. ParentSupport Continuous prediction using a decision tree to predict the ParentIncome for new students and the estimated standard deviation for each prediction.
  • 14. Each Show is considered an attribute with binary states: existing or missing.
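Treating each nested-table row as a binary attribute is equivalent to widening the case table into one indicator column per show. A minimal Python sketch of that equivalence, using made-up case IDs and show names:

```python
def flatten_nested(cases, all_shows):
    """Turn {case_id: set_of_shows} into rows of Existing/Missing states.

    This mirrors how a nested table key column is modeled: every possible
    show becomes a binary attribute on each case.
    """
    rows = {}
    for case_id, shows in cases.items():
        rows[case_id] = {show: ("Existing" if show in shows else "Missing")
                         for show in all_shows}
    return rows

# Hypothetical survey data
cases = {101: {"Michael Jackson"}, 102: {"Swan Lake", "Michael Jackson"}}
flat = flatten_nested(cases, ["Michael Jackson", "Swan Lake", "Riverdance"])
```

Case 101 then has Michael Jackson = Existing and Riverdance = Missing, which is exactly the binary-state view the algorithm builds trees over.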
  • 15. DMX Queries (Association) INSERT INTO DanceAssociation ( ID, Gender, MaritalStatus, Shows (SKIP, Show)) SHAPE { OPENQUERY (DanceSurvey, 'SELECT ID, Gender, [Marital Status] FROM Customers ORDER BY ID') } APPEND ( { OPENQUERY (DanceSurvey, 'SELECT ID, Show FROM Shows ORDER BY ID') } RELATE ID TO ID ) AS Shows Training an associative trees model. Because the model contains a nested table, the training statement uses the SHAPE statement.
  • 16. DMX Queries (Association) Suppose a married male customer likes the Michael Jackson show. This query returns the five other shows this customer is most likely to find appealing. SELECT t.ID, Predict(DanceAssociation.Shows, 5, $AdjustedProbability) AS Recommendation FROM DanceAssociation NATURAL PREDICTION JOIN (SELECT '101' AS ID, 'Male' AS Gender, 'Married' AS MaritalStatus, (SELECT 'Michael Jackson' AS Show) AS Shows) AS t
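The shape of this prediction is: score every show the customer does not already have, rank by probability, and keep the top N. A simplified Python sketch of that ranking step, with invented probabilities (note that DMX's $AdjustedProbability additionally penalizes items that are popular across all customers, which this sketch does not reproduce):

```python
def recommend(probabilities, owned, n=5):
    """Rank shows the customer does not already like by predicted probability.

    `probabilities` maps show -> model-estimated probability for this
    customer; `owned` is the set of shows already in the input case.
    """
    candidates = {s: p for s, p in probabilities.items() if s not in owned}
    ranked = sorted(candidates, key=lambda s: candidates[s], reverse=True)
    return ranked[:n]

# Hypothetical per-show probabilities for the married male customer
probs = {"Michael Jackson": 0.9, "Swan Lake": 0.8,
         "Riverdance": 0.6, "Nutcracker": 0.4}
top = recommend(probs, owned={"Michael Jackson"}, n=2)
```

Here the already-liked Michael Jackson show is excluded and the two highest-probability remaining shows come back, just as Predict(..., 5, $AdjustedProbability) returns a nested table of the top five.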
  • 17. Data Mining using Decision Trees The most common data mining task for a decision tree is classification: determining whether or not a set of data belongs to a specific type, or class. The principal idea of a decision tree is to split your data recursively into subsets. The process of evaluating all inputs is then repeated on each subset. When this recursive process completes, a decision tree has been formed.
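The core of that recursive process is choosing, at each step, the input attribute whose split most improves the purity of the target. A small Python sketch of one such split selection using entropy (one common purity score; SSAS supports several and this is an illustration, not its exact scoring): the toy rows echo the earlier age/plans example and are invented.

```python
from collections import Counter
import math

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())

def best_split(rows, target):
    """Pick the input attribute whose split most reduces entropy of `target`."""
    base = entropy([r[target] for r in rows])
    best_attr, best_gain = None, 0.0
    for attr in rows[0]:
        if attr == target:
            continue
        remainder = 0.0
        for v in {r[attr] for r in rows}:
            subset = [r[target] for r in rows if r[attr] == v]
            remainder += len(subset) / len(rows) * entropy(subset)
        gain = base - remainder
        if gain > best_gain:
            best_attr, best_gain = attr, gain
    return best_attr, best_gain

# Hypothetical toy data mirroring the School Plans example
rows = [
    {"Age": "Young", "Plans": "Yes"}, {"Age": "Young", "Plans": "Yes"},
    {"Age": "Old", "Plans": "No"}, {"Age": "Old", "Plans": "No"},
    {"Age": "Young", "Plans": "No"},
]
attr, gain = best_split(rows, "Plans")
```

Each chosen split partitions the rows, and the same selection runs again on every subset until no split improves the score, yielding the finished tree.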
  • 18. Data Mining usingDecision Trees Decision trees offer several advantages over other data mining algorithms. Trees are quick to build and easy to interpret. Each node in the tree is clearly labeled in terms of the input attributes, and each path formed from the root to a leaf forms a rule about your target variable. Prediction based on decision trees is efficient.
  • 19. Model Content for a Decision Trees Model The top level is the model node. The children of the model node are its tree root nodes. If a tree model contains a single tree, there is only one node in the second level. The nodes at the other levels are either intermediate nodes or leaf nodes of the tree. The probabilities of each predictable attribute state are stored in the distribution rowsets.
  • 20. Model Content for a Decision Trees Model
  • 21. Interpreting the Mining Model Content A decision trees model has a single parent node that represents the model and its metadata. Underneath the parent node are independent trees, one for each predictable attribute that you select. For example, if you set up your decision tree model to predict whether customers will purchase something, and provide inputs for gender and income, the model creates a single tree for the purchasing attribute, with many branches that divide on conditions related to gender and income. If you then add a separate predictable attribute for participation in a customer rewards program, the algorithm creates two separate trees under the parent node: one tree contains the analysis for purchasing, and the other contains the analysis for the customer rewards program.
  • 22. Decision Tree Parameters The tree growth, tree shape, and the input and output attribute settings are controlled using these parameters. You can fine-tune your model's accuracy by adjusting these parameter settings.
  • 30. Decision Tree Stored Procedures An example call to the DTAddNodes stored procedure for a model named MovieAssociation: CALL System.DTAddNodes('MovieAssociation', '36;34', '99;282;20;261;26;201;33;269;30;187')
  • 31. Decision Tree Stored Procedures GetTreeScores is the procedure that the Decision Tree viewer uses to populate the drop-down tree selector. It takes the name of a decision tree model as a parameter and returns a table containing a row for every tree in the model and the following three columns: ATTRIBUTE_NAME is the name of the tree. NODE_UNIQUE_NAME is the content node representing the root of the tree. MSOLAP_NODE_SCORE is a number representing the amount of information (number of nodes) in the tree.
  • 32. Decision Tree Stored Procedures DTGetNodes is used by the decision tree Dependency Network viewer when you click the Add Nodes button. It returns a row for every potential node in the dependency network and has the following two columns: NODE_UNIQUE_NAME1 is an identifier that is unique within the dependency network. NODE_CAPTION is the name of the node.
  • 33. Decision Tree Stored Procedures The DTGetNodeGraph procedure returns four columns: NODE_TYPE, NODE_UNIQUE_NAME1, NODE_UNIQUE_NAME2, and MSOLAP_NODE_SCORE. When a row has NODE_TYPE = 1, it contains a description of a node, and the remaining three columns have the following interpretation: NODE_UNIQUE_NAME1 contains a unique identifier for the node. NODE_UNIQUE_NAME2 contains the node caption. When a row has NODE_TYPE = 2, it represents a directed edge in the graph, and the remaining columns have these interpretations: NODE_UNIQUE_NAME1 contains the node name of the starting point of the edge. NODE_UNIQUE_NAME2 contains the node name of the ending point of the edge. MSOLAP_NODE_SCORE contains the relative weight of the edge.
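A client consuming this result set typically splits it into a caption lookup and an edge list keyed on NODE_TYPE. A small Python sketch of that parsing, using an invented three-row result set:

```python
def build_graph(rows):
    """Split DTGetNodeGraph-style rows into node captions and weighted edges.

    Each row is (node_type, name1, name2, score): type 1 describes a node
    (name1 = unique name, name2 = caption); type 2 is a directed edge
    from name1 to name2 with relative weight `score`.
    """
    captions, edges = {}, []
    for node_type, name1, name2, score in rows:
        if node_type == 1:
            captions[name1] = name2
        elif node_type == 2:
            edges.append((name1, name2, score))
    return captions, edges

# Hypothetical result set: two node rows and one edge row
rows = [(1, "N1", "IQ", None), (1, "N2", "SchoolPlans", None),
        (2, "N1", "N2", 0.75)]
captions, edges = build_graph(rows)
```

The Dependency Network viewer effectively does the same separation: the type-1 rows label the bubbles, and the type-2 rows draw and weight the arrows between them.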
  • 34. Decision Tree Stored Procedures DTAddNodes allows you to add new nodes to an existing graph. It takes a model name, a semicolon-separated list of the IDs of nodes you want to add to the graph, and a semicolon-separated list of the IDs of nodes already in the graph. This procedure returns a table similar to the NODE_TYPE = 2 section of DTGetNodeGraph, but without the NODE_TYPE column. The rows in the result set contain all the edges between the added nodes, and all of the edges between the added nodes and the nodes specified as already in the graph.
  • 35. Summary Decision Trees Algorithm Overview DMX Queries Data Mining using Decision Trees Interpreting the Model Content for a Decision Trees Model Decision Tree Parameters Decision Tree Stored Procedures
  • 36. Visit more self-help tutorials Pick a tutorial of your choice and browse through it at your own pace. The tutorials section is free, self-guiding, and will not involve any additional support. Visit us at www.dataminingtools.net