SUPPORT VECTOR MACHINE BY PARIN SHAH
SVM FOR LINEARLY SEPARABLE DATA
- Plot the points.
- Find the support vectors and the margin.
- Find the hyperplane with the maximum margin.
- Using the computed margin, classify new input data points into categories.
FIGURE REPRESENTING LINEARLY SEPARABLE DATA
[Figure: the support vectors and the maximum-margin hyperplane.]
(w · x) + b = +1 (positive labels)
(w · x) + b = -1 (negative labels)
(w · x) + b = 0 (hyperplane)
Margin = 2 / ||w||
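To make the hyperplane, support vectors and margin of the two slides above concrete, here is a minimal sketch. It assumes scikit-learn, which the slides do not name; a large C approximates the hard-margin case.

```python
import numpy as np
from sklearn.svm import SVC

# Toy linearly separable 2-D data with labels -1 and +1.
X = np.array([[1.0, 1.0], [2.0, 1.5], [1.5, 2.0],
              [4.0, 4.0], [5.0, 4.5], [4.5, 5.0]])
y = np.array([-1, -1, -1, 1, 1, 1])

clf = SVC(kernel="linear", C=1e6).fit(X, y)  # large C ~ hard margin

w = clf.coef_[0]               # w in (w . x) + b = 0
b = clf.intercept_[0]          # b
print("support vectors:\n", clf.support_vectors_)
print("margin =", 2.0 / np.linalg.norm(w))   # Margin = 2 / ||w||

# New inputs are classified by the side of the hyperplane they fall on.
print(clf.predict([[3.0, 3.0], [1.0, 2.0]]))
```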
SVM FOR NON-LINEARLY SEPARABLE DATA
STEPS FOR NON-LINEARLY SEPARABLE DATA
1.) Map the points into a feature space.
2.) Use the polynomial feature map Φ(x1) = (x1, x1^2) to map the points.
3.) Compute the positive, negative and zero hyperplanes.
4.) Obtain the support vectors and the margin value from them.
5.) Classify new input values using the margin value, as in the sketch below.
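A minimal sketch of steps 1 and 2, again assuming scikit-learn: one-dimensional points that no single threshold can separate become linearly separable after the feature map Φ(x1) = (x1, x1^2).

```python
import numpy as np
from sklearn.svm import SVC

# 1-D data that is not linearly separable: the negative class sits
# between the two positive points.
x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y = np.array([1, -1, -1, -1, 1])

# Explicit feature map Phi(x1) = (x1, x1^2) from the slide.
X_mapped = np.column_stack([x, x ** 2])

# A plain linear SVM now separates the classes in the mapped space.
clf = SVC(kernel="linear", C=1e6).fit(X_mapped, y)

# x = 3 maps above the separating boundary, x = 0.5 below it.
x_new = np.array([3.0, 0.5])
print(clf.predict(np.column_stack([x_new, x_new ** 2])))  # -> [ 1 -1]
```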
KERNEL AND ITS TYPES. Computing points in the feature space directly can be very costly, because the feature space is typically high-dimensional and can even be infinite-dimensional. The kernel function reduces this cost: since the data points appear only in dot products, the kernel can compute those inner products directly. With a kernel function we evaluate the inner products of data points without ever explicitly mapping them into the feature space.
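A small illustration of that last point. For the quadratic kernel K(x, y) = (x · y)^2 in two dimensions (a standard textbook example, not spelled out on the slide), the implicit feature map is Φ(x) = (x1^2, √2·x1·x2, x2^2), and the kernel value equals the feature-space inner product without ever computing Φ:

```python
import numpy as np

# Implicit feature map of the quadratic kernel K(x, y) = (x . y)^2 in 2-D.
def phi(x):
    return np.array([x[0] ** 2, np.sqrt(2) * x[0] * x[1], x[1] ** 2])

x = np.array([1.0, 2.0])
y = np.array([3.0, 4.0])

explicit = phi(x) @ phi(y)   # inner product computed in feature space
via_kernel = (x @ y) ** 2    # kernel evaluated directly on the inputs

print(explicit, via_kernel)  # both are 121 (up to floating-point rounding)
```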
KERNEL AND ITS TYPES.
1.) Polynomial kernel with degree d: K(x, y) = (x' * y + 1)^d
2.) Radial basis function kernel with width s: K(x, y) = exp(-||x - y||^2 / (2 * s^2))
3.) Sigmoid kernel with parameters k and q: K(x, y) = tanh(k * (x' * y) + q)
4.) Linear kernel: K(x, y) = x' * y
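The four kernels as plain functions, in a minimal numpy sketch; the parameter names d, s, k and q follow the slide, and the default values are arbitrary:

```python
import numpy as np

def linear_kernel(x, y):
    return x @ y

def polynomial_kernel(x, y, d=2):
    return (x @ y + 1) ** d

def rbf_kernel(x, y, s=1.0):
    return np.exp(-np.linalg.norm(x - y) ** 2 / (2 * s ** 2))

def sigmoid_kernel(x, y, k=1.0, q=0.0):
    return np.tanh(k * (x @ y) + q)

x, y = np.array([1.0, 2.0]), np.array([3.0, 4.0])
for kern in (linear_kernel, polynomial_kernel, rbf_kernel, sigmoid_kernel):
    print(kern.__name__, kern(x, y))
```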
SPARSE MATRIX AND SPARSE DATA
- A sparse matrix is a simple data structure, based on a 2-dimensional array, that stores only the non-zero values.
- Sparse data iterates over the non-zero values only.
- It stores the value, row number and column number of each non-zero entry of the matrix.
- Inner products are cheap to compute because all zero entries can be skipped.
- Using sparse data increases the speed of SVM algorithms, as the sketch below illustrates.
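A toy sketch of why sparsity makes inner products cheap; the dict-of-index-to-value representation is illustrative only, not a specific library format:

```python
# A sparse vector as a dict mapping index -> non-zero value.
u = {0: 1.0, 7: 2.5, 420: 3.0}
v = {7: 4.0, 99: 5.0, 420: 0.5}

def sparse_dot(a, b):
    # Iterate over the smaller vector and look up matching indices;
    # every index missing from either vector contributes a zero term,
    # so the zeros never need to be touched at all.
    if len(a) > len(b):
        a, b = b, a
    return sum(val * b[i] for i, val in a.items() if i in b)

print(sparse_dot(u, v))  # 2.5*4.0 + 3.0*0.5 = 11.5
```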
STORING SPARSE DATA
- Dictionary of keys (DOK): represents non-zero values as a dictionary mapping (row, column) tuples to values.
- List of lists (LIL): stores one list per row, where each entry holds a column index and a value. Typically these entries are kept sorted by column index for faster lookup.
- Coordinate list (COO): stores a list of (row, column, value) tuples, sorted first by row index and then by column index to improve random access times.
- Yale format: see the next slide; the first three formats are sketched below.
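All three list-like formats are available in scipy.sparse (an assumption; the slides name no library). A minimal sketch on the small matrix used on the next slide:

```python
import numpy as np
from scipy.sparse import dok_matrix, lil_matrix, coo_matrix

dense = np.array([[1, 2, 0, 0],
                  [0, 3, 9, 0],
                  [0, 1, 4, 0]])

# The same matrix in the three formats named above.
dok = dok_matrix(dense)   # dict mapping (row, col) -> value
lil = lil_matrix(dense)   # one list of column indices / values per row
coo = coo_matrix(dense)   # parallel row / col / data arrays

print(dict(dok.items()))
print(lil.rows, lil.data)          # column indices and values, row by row
print(coo.row, coo.col, coo.data)  # (row, column, value) triples
```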
STORING SPARSE DATA
The Yale sparse matrix format stores an initial sparse m×n matrix M in three one-dimensional arrays. Let NNZ be the number of non-zero entries of M.
- Array A: length NNZ; holds all non-zero entries in row-major order (left to right within a row, top to bottom across rows).
- Array IA: length m + 1. IA(i) contains the index in A of the first non-zero element of row i. Row i of the original matrix extends from A(IA(i)) to A(IA(i+1)-1), i.e. from the start of one row to the last index before the start of the next.
- Array JA: length NNZ; the column index of each element of A.
EXAMPLE:
[ 1 2 0 0 ]
[ 0 3 9 0 ]
[ 0 1 4 0 ]
Computing the three arrays gives
A = [ 1 2 3 9 1 4 ], IA = [ 0 2 4 6 ] and JA = [ 0 1 1 2 1 2 ].
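The Yale format is, in essence, what scipy.sparse calls CSR (compressed sparse row): the attributes data, indptr and indices play the roles of A, IA and JA. A quick check on the slide's example:

```python
import numpy as np
from scipy.sparse import csr_matrix

dense = np.array([[1, 2, 0, 0],
                  [0, 3, 9, 0],
                  [0, 1, 4, 0]])

m = csr_matrix(dense)
print(m.data)     # A  = [1 2 3 9 1 4]   non-zero values in row-major order
print(m.indptr)   # IA = [0 2 4 6]       start of each row inside A
print(m.indices)  # JA = [0 1 1 2 1 2]   column index of each value
```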
ADVANTAGES OF SVM
- Support Vector Machines are very effective in high-dimensional spaces.
- They remain effective even when the number of dimensions is greater than the number of samples.
- They are memory efficient, because only a subset of the training points (the support vectors) determines the classification decision.
- They are versatile: a different kernel can be defined for each decision function, as long as it gives correct results, and we can define our own kernel to suit our requirements.
DISADVANTAGES OF SVM
- If the number of features is much greater than the number of samples, the method is likely to perform poorly; it is best suited to small training samples.
- SVMs do not directly provide probability estimates; these must be calculated using indirect techniques (see the sketch below).
- Non-traditional data such as strings and trees can be used as input, instead of feature vectors, only by designing suitable kernels.
- An appropriate kernel must be selected for each project according to its requirements.
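On the probability point: scikit-learn's SVC, for example, exposes one such indirect technique, Platt scaling fitted by internal cross-validation, behind a single flag. A minimal sketch:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (20, 2)),   # class 0 around (0, 0)
               rng.normal(3.0, 1.0, (20, 2))])  # class 1 around (3, 3)
y = np.array([0] * 20 + [1] * 20)

# probability=True fits a sigmoid (Platt scaling) on the SVM decision
# values via internal cross-validation; the estimates are indirect and
# post-hoc, not part of the SVM optimisation itself.
clf = SVC(kernel="linear", probability=True).fit(X, y)
print(clf.predict_proba([[1.5, 1.5]]))  # one probability per class
```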
