ITB TERM PAPER
                      Classification and Clustering




Nitin Kumar Rathore

        10BM60055
Introduction
Weka stands for Waikato Environment for Knowledge Analysis. It is free machine learning
software, written in Java and developed at the University of Waikato, New Zealand. The
Weka workbench includes a set of visualization tools and algorithms that support better
decision making through data analysis and predictive modeling, and it has a GUI (graphical
user interface) for ease of use. Because it is developed in Java, it is portable across
platforms. Weka has many applications and is widely used for research and educational
purposes. The data mining functions Weka supports include classification, clustering,
feature selection, data preprocessing, regression and visualization.

The Weka startup screen looks like this:




This is the Weka GUI Chooser. It gives you four interfaces to work with:

       Explorer: used for exploring data with Weka; it gives access to all of Weka's
       facilities through menus and forms.
       Experimenter: allows you to create, analyse, modify and run large-scale
       experiments. It can be used to answer questions such as which of several schemes
       is better, if any.
       Knowledge Flow: has much the same function as the Explorer, but also supports
       incremental learning; it handles data incrementally, using incremental algorithms
       to process it.
       Simple CLI: a command line interface that provides all of Weka's functionality
       through typed commands.
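
For example, typing the following line in the Simple CLI (a minimal sketch; it assumes the
iris.arff file used later in this paper sits in the directory Weka was started from) builds
Weka's J48 decision tree on that file and reports a 10-fold cross-validation:

    java weka.classifiers.trees.J48 -t iris.arff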

Data Mining Techniques

Of the data mining techniques provided by Weka (classification, clustering, feature
selection, data preprocessing, regression and visualization), this paper demonstrates the
use of classification and clustering.
Classification
Classification builds a model with which a new instance can be assigned to one of the
existing, predetermined classes. For example, by creating a decision tree from past sales
we can determine how likely a person is to buy a product given attributes such as
disposable income, family size, and state/country.

To start with classification, you must use or create a file in ARFF, CSV, or another
supported format. An ARFF file is essentially a table stored as text. To create an ARFF file
from Excel, follow these steps:

       Open the Excel file and remove the headings.
       Save it as a CSV (comma delimited) file.
       Open the CSV file in a text editor.
       Write the relation name at the top of the file as: @relation <relation_name>.
       The text inside the angle brackets, < and >, is to be filled in as required.
       Leave a blank line and enter all the attributes (the column heads) in the format:
       @attribute <attribute_name> {<attribute_values>}, with nominal values listed inside
       curly braces. For example: @attribute outlook {sunny, overcast, rainy}
       After entering all the attributes, leave a blank line and write: @data
       This line appears just above the comma-separated data values of the file.
       Save the file as <file_name>.arff
       A sample ARFF file is shown below.
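
In text form, a small file produced by these steps (using the outlook attribute from the
example above, with made-up temperature and play values) would look like this:

    @relation weather

    @attribute outlook {sunny, overcast, rainy}
    @attribute temperature numeric
    @attribute play {yes, no}

    @data
    sunny,85,no
    overcast,83,yes
    rainy,70,yes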
Classification example:
Our goal is to create a decision tree with Weka so that we can classify new or unknown iris
flower samples.

There are three kinds of iris: Iris setosa, Iris versicolor, and Iris virginica.

Data file: a data file containing attribute values for 150 iris samples in ARFF format is
available at this link: http://code.google.com/p/pwr-apw/downloads/detail?name=iris.arff.

The idea behind the classification is that sepal and petal length and width help us identify
an unknown iris; the data file contains all four of these attributes. The algorithm we will
use is Weka's J4.8 decision tree learner (the J48 classifier).

Follow these steps to classify:

        Open Weka and choose Explorer. Then open the downloaded ARFF file.
        Go to the Classify tab.
        Click "Choose" and select the J48 algorithm under the trees section.




        Click on the chosen J48 entry to open the Weka GenericObjectEditor.
        Change the option saveInstanceData to true and click OK. This lets you inspect,
        after the decision tree is built, how each sample was classified.
        Select the "Percentage split" option in the "Test options" section. Weka trains on
        the percentage entered in the box and tests on the rest of the data; the default
        value is 66%. (The same split-and-evaluate run is sketched in code at the end of
        this section.)
        Click "Start" to run the classification. The box named "Classifier output" shows
        the result of the classification, which will look like this:




Now we will view the tree. Right-click the entry in the "Result list" and select
"Visualize tree".
The decision tree will open in a new window.




The tree gives the decision structure, or flow, followed during classification. For example,
if petal width > 0.6, petal width <= 1.7, petal length > 4.9 and petal width <= 1.5, the iris
is classified as virginica.

Now look at the classifier output box: the rules describing the decision tree are listed
there, as shown in the picture.
As the decision tree shows, sepal length and width are not needed for classification;
only petal length and width are used.
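
In the textual form printed by J48, a tree with these splits looks roughly as follows
(reconstructed here from the thresholds quoted above; the exact thresholds and leaf labels
depend on the training split Weka happened to use):

    petalwidth <= 0.6: Iris-setosa
    petalwidth > 0.6
    |   petalwidth <= 1.7
    |   |   petallength <= 4.9: Iris-versicolor
    |   |   petallength > 4.9
    |   |   |   petalwidth <= 1.5: Iris-virginica
    |   |   |   petalwidth > 1.5: Iris-versicolor
    |   petalwidth > 1.7: Iris-virginica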

       Go to the classifier output box and scroll to the section "Evaluation on test split".
       We split the data into 66% for training and 34% for testing the model; this section
       will look as follows.




       Weka took 51 samples (34%) for testing, of which 49 are classified correctly and 2
       are classified incorrectly.
       If you look at the confusion matrix below in the classifier output box, you will see
       that all setosa (15) and all versicolor (19) samples are classified correctly, but 2
       of the 17 virginica samples are classified as versicolor.
       To learn more, or to visualize how the decision tree did on the test samples,
       right-click the entry in the "Result list" and select "Visualize classifier errors".
       A new window will open. Since our tree uses only petal width and petal length to
       classify, select Petal length for the X axis and Petal width for the Y axis.
       A cross ("x") represents a correctly classified sample and a square represents an
       incorrectly classified one.
       The classes setosa, versicolor and virginica are shown in different colours: blue,
       red and green.
       We can also see why these samples are misclassified: by petal length and width,
       the two virginica samples fall inside the versicolor group.
The window will appear as follows.




Left-clicking on the squared instances (circled in black) gives information about that
instance.
As we can see, 2 of the 50 virginica samples (train + test) are classified incorrectly, while
all setosa and versicolor samples are classified correctly. There can be many reasons for
this; a few are mentioned below.

       Attribute measurement error: arises from incorrect measurement of the petal and
       sepal lengths and widths.
       Sample class identification error: some samples may have been labelled with the
       wrong class, say some versicolor recorded as virginica.
       Outlier samples: some infected or abnormal flowers were sampled.
       Inappropriate classification algorithm: the algorithm we chose is not suitable for
       this classification task.
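
The Explorer run above can also be reproduced with Weka's Java API. The following is a
minimal sketch, assuming the downloaded data is saved as iris.arff in the working directory;
because the random seed used for the split may differ from the Explorer's, the exact counts
can vary slightly:

    import java.util.Random;
    import weka.classifiers.Evaluation;
    import weka.classifiers.trees.J48;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class IrisJ48 {
        public static void main(String[] args) throws Exception {
            // Load the ARFF file; the class attribute is the last one.
            Instances data = DataSource.read("iris.arff");
            data.setClassIndex(data.numAttributes() - 1);

            // Reproduce the Explorer's "Percentage split" test option (66% train).
            data.randomize(new Random(1));
            int trainSize = (int) Math.round(data.numInstances() * 0.66);
            Instances train = new Instances(data, 0, trainSize);
            Instances test  = new Instances(data, trainSize, data.numInstances() - trainSize);

            // Build the J4.8 decision tree on the training portion.
            J48 tree = new J48();
            tree.buildClassifier(train);
            System.out.println(tree);                  // textual form of the tree

            // Evaluate on the held-out test portion.
            Evaluation eval = new Evaluation(train);
            eval.evaluateModel(tree, test);
            System.out.println(eval.toSummaryString());
            System.out.println(eval.toMatrixString()); // confusion matrix
        }
    }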


Clustering
Clustering is the formation of groups of instances on the basis of their attributes and is
used to discover patterns in the data. An advantage of clustering over classification is that
every attribute is used to define a group; a disadvantage is that the user must decide
beforehand how many groups to form.

There are two types of clustering:

Hierarchical clustering: This approach uses a distance measure (generally squared
Euclidean distance) between objects to form clusters. The process starts with every object
as a separate cluster. The two closest clusters are then joined into one cluster, which is
treated as a new object, and the process repeats until a single cluster remains or the
required number of clusters is reached.

Non-hierarchical clustering: In this method the observations (say n of them) are partitioned
into k clusters. Each observation is assigned to the nearest cluster and the cluster means
are then recalculated. In this paper we will study a k-means clustering example.
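
As a minimal illustration of this assign-and-recompute loop, here is a small piece of plain
Java (not Weka code) run on made-up one-dimensional data with k = 2:

    import java.util.Arrays;

    public class KMeansSketch {
        public static void main(String[] args) {
            double[] points = {1.0, 1.2, 0.8, 5.0, 5.3, 4.9};
            double[] means = {points[0], points[3]};   // initial cluster centres
            int[] assignment = new int[points.length];

            for (int iter = 0; iter < 10; iter++) {
                // Assignment step: each point goes to the nearest mean.
                for (int i = 0; i < points.length; i++) {
                    assignment[i] = Math.abs(points[i] - means[0])
                                  <= Math.abs(points[i] - means[1]) ? 0 : 1;
                }
                // Update step: recompute each cluster mean.
                for (int k = 0; k < 2; k++) {
                    double sum = 0;
                    int count = 0;
                    for (int i = 0; i < points.length; i++) {
                        if (assignment[i] == k) { sum += points[i]; count++; }
                    }
                    if (count > 0) means[k] = sum / count;
                }
            }
            System.out.println("Means: " + Arrays.toString(means));
            System.out.println("Assignment: " + Arrays.toString(assignment));
        }
    }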

Applications of clustering include:

       Market segmentation
       Computer vision
       Geostatistics
       Understanding buyer behavior

Data file: The data file describes a BMW dealership. It records how customers move
through the dealership and showroom, which cars they look at, and how often they make a
purchase. It contains 100 rows of data, and every attribute/column represents a step that
the customer reached in the buying process: "1" means the customer reached that step,
"0" means they did not. Download the data file from this link:
http://www.ibm.com/developerworks/apps/download/index.jsp?contentid=487584&filename=os-weka2-Examples.zip&method=http&locale=

Let us have a sneak peek into the data file.
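
In outline, the data looks roughly like the rows below (the column names are inferred from
the cluster discussion later in this paper and the rows are made up; the exact names and
order in the downloaded file may differ):

    Dealership  Showroom  ComputerSearch  M5  3Series  Z4  Financing  Purchase
    1           0         0               0   0        0   0          0
    1           1         1               0   0        0   1          0
    1           0         0               0   1        0   1          1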




Now follow these steps to perform clustering:

       Load the file into Weka with the "Open file" option under the Preprocess tab of the
       Weka Explorer, or by double-clicking the file.
       The Weka Explorer will look like this:
       To create the clusters, click on the Cluster tab. Click the "Choose" button and
       select "SimpleKMeans".
       Click on the text box next to the Choose button, which displays the k-means
       algorithm. This opens the Weka GenericObjectEditor.
       Change "numClusters" from 2 to 5; it defines the number of clusters to be formed.
       Click OK.
       Click Start to start clustering.
       An entry will appear in the "Result list" box and the clusterer output will display
       the output of the clustering. It will appear as follows.




Cluster Results:

Now we have the clusters defined. You can view the cluster data in a separate window by
right-clicking the entry in the "Result list" box. Five clusters are formed, named "0" to
"4". If an attribute value for a cluster is "1", all the instances in that cluster have the
value "1" for that attribute; likewise, a "0" means all instances in the cluster have the
value "0" for that attribute. To recap, "0" means the customer did not reach that step of
the buying process and "1" means the customer did.
"Clustered Instances", a heading in the clusterer output, shows how many instances belong
to each cluster. For example, cluster "0" has 26 instances, or 26% of the instances (since
there are 100 rows, the count equals the percentage).
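
The same cluster sizes can be obtained programmatically with Weka's Java API. Below is a
minimal sketch; the file name bmw-browsers.arff is an assumption, so substitute whatever
name the downloaded file actually has:

    import weka.clusterers.SimpleKMeans;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class BmwKMeans {
        public static void main(String[] args) throws Exception {
            // Load the dealership data (no class attribute is set for clustering).
            Instances data = DataSource.read("bmw-browsers.arff"); // assumed file name

            // Configure k-means with 5 clusters, as in the Explorer walkthrough.
            SimpleKMeans kmeans = new SimpleKMeans();
            kmeans.setNumClusters(5);
            kmeans.buildClusterer(data);

            // Print the cluster centroids (the table shown in the clusterer output).
            System.out.println(kmeans);

            // Count how many instances fall into each cluster ("Clustered Instances").
            int[] counts = new int[kmeans.getNumClusters()];
            for (int i = 0; i < data.numInstances(); i++) {
                counts[kmeans.clusterInstance(data.instance(i))]++;
            }
            for (int c = 0; c < counts.length; c++) {
                System.out.println("Cluster " + c + ": " + counts[c] + " instances");
            }
        }
    }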



The values for the clusters, shown in a separate window, are given in the picture below.




Interpreting the clusters

       Cluster 0: This is the group of non-purchasers. They may visit the dealership and
       look at cars in the showroom, but when it comes to purchasing a car they do
       nothing. This group only adds cost and brings no revenue.
       Cluster 1: This group is attracted to the M5: they go straight to the M5s, ignoring
       the 3 Series and paying no attention at all to the Z4, and they do not even do a
       computer search. But this high footfall does not bring corresponding sales, and the
       reason for the mediocre sales should be unearthed. If customer service is the
       problem, we should improve service quality in the M5 section by training the sales
       executives better; if there are too few sales personnel to attend to every customer,
       we can provide more staff for the M5 section.
       Cluster 2: This group contains only 5 instances out of 100. It can be called an
       "insignificant group": it is not statistically important, and we should not draw any
       conclusions from it. It also indicates that we might reduce the number of clusters.
       Cluster 3: This is the group of customers we can call "sure-shot buyers", because
       they always buy a car. One thing to note is that we should take care of their
       financing, as they always use it. They look around the showroom for available cars
       and also do a computer search of the dealership's stock, but they generally do not
       look at the 3 Series. This suggests we should make the M5 and Z4 more visible and
       attractive in computer search results.
       Cluster 4: This group makes the fewest purchases after the non-purchasers. They
       are new to the category: they do not look at expensive cars like the M5 but at the
       3 Series instead, they walk into the showroom, and they do not use the computer
       search. About 50 percent of them reach the financing stage but only 32 percent end
       up buying a car. This suggests they are buying their first BMW and know exactly
       what they need (the entry-level 3 Series), and they generally need financing to
       afford the car. To increase sales we should therefore improve the conversion from
       the financing stage to the purchase stage: identify the problem there and take
       appropriate steps, for example making financing easier by collaborating with a
       bank, or relaxing the terms that repel customers.
REFERENCES

[1] Ian H. Witten, Eibe Frank and Mark A. Hall, Data Mining: Practical Machine Learning
Tools and Techniques, 3rd edition, Morgan Kaufmann Publishers.

[2] Weka tutorial provided by the University of Waikato, www.cs.waikato.ac.nz/ml/weka/

[3] Weka: Classification using decision trees, based on Dr. Polczynski's lecture, written by
Prof. Andrzej Kochanski and Prof. Marcin Perzyk, Faculty of Production Engineering,
Warsaw University of Technology, Warsaw, Poland,
http://referensi.dosen.narotama.ac.id/files/2011/12/weka-tutorial-2.pdf

[4] Classification via Decision Trees in WEKA, Computer Science, Telecommunications and
Information Systems, DePaul University,
http://maya.cs.depaul.edu/classes/ect584/weka/classify.html

[5] Michael Abernethy, Data mining with WEKA, Part 2: Classification and clustering, IBM
developerWorks, http://www.ibm.com/developerworks/opensource/library/os-weka2/index.html?ca=drs-
