Curse of Dimensionality

By Nikhil Sharma
What is it?
• In applied mathematics, the curse of dimensionality (a term coined by Richard E. Bellman), also known as the Hughes effect (named after Gordon F. Hughes), refers to the problems caused by the exponential increase in volume associated with adding extra dimensions to a mathematical space.
Problems
• High-dimensional data is difficult to work with because:
  ◦ Adding more features can increase the noise, and hence the error
  ◦ There usually aren't enough observations to get good estimates
• This causes:
  ◦ An increase in running time
  ◦ Overfitting
  ◦ An increase in the number of samples required
An Example
Representation of a 10% sample of the probability space: (i) 2-D (ii) 3-D [figures omitted]

The number of points would need to increase exponentially to maintain a given accuracy:
• 10^n samples would be required for an n-dimensional problem (a quick sketch follows).
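A quick sketch of that growth (my addition, in Python; the 10-samples-per-axis figure comes from the example above):

    # Samples needed to keep 10 samples' worth of coverage per axis in
    # n dimensions: with 10 grid cells per axis (the 10%-per-sample picture
    # above), the total count is 10**n, exponential in the dimension.
    for n in range(1, 7):
        print(f"{n}-D: {10 ** n:>9,} samples")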
Curse of Dimensionality: Complexity
• Complexity (running time) increases with the dimension d
• Many methods have at least O(nd^2) complexity, where n is the number of samples
• So as d becomes large, O(nd^2) complexity may be too costly, as the timing sketch below suggests
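As a rough illustration (my addition, not from the slides), estimating a d-by-d covariance matrix from n samples is one common O(nd^2) step; a small NumPy timing sketch shows how the cost grows with d:

    import time
    import numpy as np

    # Rough timing of an O(n d^2) operation: the sample covariance of n points
    # in d dimensions builds a d x d matrix, roughly n*d^2 work. Absolute times
    # are machine-dependent; only the growth with d matters here.
    n = 5000
    for d in (50, 100, 200, 400):
        X = np.random.randn(n, d)
        start = time.perf_counter()
        np.cov(X, rowvar=False)
        print(f"d = {d:3d}: {time.perf_counter() - start:.3f} s")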
Curse of Dimensionality: Overfitting
• Paradox: if n < d^2, we are better off assuming that the features are uncorrelated, even if we know this assumption is wrong
• We are likely to avoid overfitting because we fit a model with fewer parameters, as the count below makes concrete:
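To make the parameter count concrete (a hedged illustration of the omitted formula, not the slide's own): a Gaussian class model with a full covariance matrix has on the order of d^2 free parameters, while the "uncorrelated features" (diagonal covariance) assumption needs only about 2d:

    # Free parameters of a d-dimensional Gaussian class model:
    #   full covariance:     d (mean) + d*(d+1)/2 (symmetric covariance matrix)
    #   diagonal covariance: d (mean) + d (variances) -- "uncorrelated" model
    def gaussian_param_counts(d):
        full = d + d * (d + 1) // 2
        diagonal = d + d
        return full, diagonal

    for d in (10, 50, 200):
        full, diagonal = gaussian_param_counts(d)
        print(f"d = {d:3d}: full = {full:6d}, diagonal = {diagonal:4d}")
    # With n < d^2 samples there is little hope of estimating ~d^2/2 covariance
    # entries reliably, so the simpler (wrong) model often generalizes better.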
Curse of Dimensionality: Number of Samples
• Suppose we want to use the nearest neighbor approach with k = 1 (1NN)
• Suppose we start with only one feature [figure omitted]
• This feature is not discriminative, i.e. it does not separate the classes well
• We decide to use 2 features. For the 1NN method to work well, we need a lot of samples, i.e. the samples have to be dense
• To maintain the same density as in 1D (9 samples per unit length), how many samples do we need?
Curse of Dimensionality: Number of Samples
• We need 9^2 = 81 samples to maintain the same density as in 1D [figure omitted]
Curse of Dimensionality: Number of Samples
• Of course, when we go from 1 feature to 2, no one gives us more samples; we still have 9 [figure omitted]
• This is way too sparse for 1NN to work well
Curse of Dimensionality: Number of Samples
• Things go from bad to worse if we decide to use 3 features [figure omitted]
• If 9 samples were dense enough in 1D, in 3D we need 9^3 = 729 samples!
Curse of Dimensionality: Number of Samples
• In general, if n samples are dense enough in 1D, then in d dimensions we need n^d samples!
• And n^d grows very fast as a function of d
• Common pitfall:
  ◦ If we can't solve a problem with a few features, adding more features seems like a good idea
  ◦ However, the number of samples usually stays the same
  ◦ The method with more features is then likely to perform worse, not better as expected (demonstrated in the sketch below)
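Here is a self-contained demo of the pitfall (my sketch; it assumes scikit-learn, which the slides do not mention): keep the sample count fixed, pad one informative feature with uninformative noise features, and watch 1NN test accuracy fall.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    # Fixed number of samples; one informative feature plus growing noise.
    rng = np.random.default_rng(0)
    n = 200
    y = rng.integers(0, 2, size=n)                       # two classes
    informative = y[:, None] + 0.8 * rng.standard_normal((n, 1))

    for extra in (0, 5, 20, 100):
        noise = rng.standard_normal((n, extra))          # useless dimensions
        X = np.hstack([informative, noise])
        Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.5,
                                              random_state=0)
        acc = KNeighborsClassifier(n_neighbors=1).fit(Xtr, ytr).score(Xte, yte)
        print(f"{1 + extra:3d} features: 1NN test accuracy = {acc:.2f}")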
Curse of Dimensionality: Number of Samples
• For a fixed number of samples, as we add features, the classification error typically first decreases and then increases [graph omitted]
• Thus for each fixed sample size n, there is an optimal number of features to use
The Curse of Dimensionality
• We should try to avoid creating a lot of features
• Often there is no choice: the problem starts with many features
• Example: face detection
  ◦ One sample point is a k by m array of pixels [image omitted]
  ◦ Feature extraction is not trivial, so usually every pixel is taken as a feature
  ◦ A typical dimension is 20 by 20 = 400
  ◦ Suppose 10 samples are dense enough for 1 dimension. Then we would need 10^400 samples (see the scale check below)!
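For scale (my addition): Python's arbitrary-precision integers make the comparison direct; 10^400 dwarfs even the commonly cited rough estimate of ~10^80 atoms in the observable universe.

    # 10**400 required samples vs. a rough, commonly cited cosmic count.
    samples_needed = 10 ** 400
    atoms_in_observable_universe = 10 ** 80   # rough order-of-magnitude figure
    print(samples_needed // atoms_in_observable_universe)   # ~10**320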
The Curse of Dimensionality
• Face detection: the dimension of one sample point is km [image omitted]
• The fact that we set up the problem with km dimensions (features) does not mean it is really a km-dimensional problem
  ◦ Most likely we are not setting the problem up with the right features
  ◦ If we used better features, we would likely need far fewer than km dimensions
  ◦ The space of all k by m images has km dimensions
  ◦ The space of all k by m faces must be much smaller, since faces form a tiny fraction of all possible images
Dimensionality Reduction
• We want to reduce the number of dimensions because of:
  ◦ Efficiency
  ◦ Measurement costs
  ◦ Storage costs
  ◦ Computation costs
  ◦ Classification performance
  ◦ Ease of interpretation/modeling
Principal Components
• The idea is to project onto the subspace which accounts for most of the variance.
• This is accomplished by projecting onto the eigenvectors of the covariance matrix associated with the largest eigenvalues (see the sketch after this list).
• This is generally not the projection best suited for classification.
• It can be a useful method for providing a first-cut reduction in dimension from a high-dimensional space.
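A minimal NumPy sketch of that recipe (my illustration, not code from the source): center the data, eigendecompose the covariance matrix, and project onto the k eigenvectors with the largest eigenvalues.

    import numpy as np

    def pca_project(X, k):
        """Project the rows of X onto the k largest-eigenvalue eigenvectors
        of the covariance matrix (a minimal PCA sketch)."""
        Xc = X - X.mean(axis=0)                   # center the data
        cov = np.cov(Xc, rowvar=False)            # d x d covariance matrix
        eigvals, eigvecs = np.linalg.eigh(cov)    # eigh: ascending eigenvalues
        top_k = eigvecs[:, np.argsort(eigvals)[::-1][:k]]
        return Xc @ top_k                         # n x k projected data

    # Usage: 5-D data with intrinsic dimension 2, reduced back to 2-D.
    rng = np.random.default_rng(1)
    X = rng.standard_normal((300, 2)) @ rng.standard_normal((2, 5))
    print(pca_project(X, k=2).shape)              # (300, 2)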
Feature Combination as a Method to Reduce Dimensionality
• High-dimensional data is challenging to work with, and often redundant
• It is natural to try to reduce the dimensionality
• Reduce dimensionality by feature combination: combine the old features x to create new features y [diagram omitted]
• For example: [the slide's example image is missing; a stand-in sketch follows this list]
• Ideally, the new vector y should retain from x all information important for classification
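The slide's worked example did not survive this export; as a stand-in (my assumed illustration), the simplest version of the idea is a linear combination y = Wx, where a k-by-d matrix W maps d old features to k new ones:

    import numpy as np

    # Linear feature combination: y = W x. W is random here purely for
    # illustration; PCA (next slide) chooses W so that y keeps as much of
    # the variance of x as possible.
    d, k = 10, 3
    rng = np.random.default_rng(2)
    W = rng.standard_normal((k, d))   # hypothetical k x d combination matrix
    x = rng.standard_normal(d)        # one old feature vector
    y = W @ x                         # k new combined features
    print(y.shape)                    # (3,)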
Principal Component Analysis (PCA)
• Main idea: seek the most accurate data representation in a lower-dimensional space
• Example in 2-D:
  ◦ Project the data onto the 1-D subspace (a line) which minimizes the projection error [figure omitted]
• Notice that the good line to use for projection lies in the direction of largest variance, as the numerical check below illustrates
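A small numerical check of that claim (my sketch): for an elongated 2-D cloud, sweep candidate lines through the mean and confirm that the direction of largest variance gives the smallest total squared projection error.

    import numpy as np

    # Elongated 2-D cloud: variance ~9 along the x-axis, ~0.25 along y.
    rng = np.random.default_rng(3)
    X = rng.standard_normal((500, 2)) @ np.array([[3.0, 0.0], [0.0, 0.5]])
    Xc = X - X.mean(axis=0)

    def projection_error(theta):
        u = np.array([np.cos(theta), np.sin(theta)])  # unit direction of line
        residual = Xc - np.outer(Xc @ u, u)           # part orthogonal to line
        return float((residual ** 2).sum())

    angles = np.linspace(0.0, np.pi, 180)
    best = min(angles, key=projection_error)
    print(f"best angle ~ {best:.2f} rad")   # ~0: the largest-variance axis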
PCA: Approximation of an Elliptical Cloud in 3D [figure omitted]
Thank You!
The End.

Editor's Notes

1. Let's say you have a straight line 100 yards long and you dropped a penny somewhere on it. It wouldn't be too hard to find: you walk along the line, and it takes two minutes. Now let's say you have a square 100 yards across and you dropped a penny somewhere on it. It would be pretty hard, like searching across two football fields stuck together. It could take days. Now a cube 100 yards across: that's like searching a 30-story building the size of a football stadium. Ugh. The difficulty of searching through the space gets a *lot* harder as you have more dimensions. You might not realize this intuitively when it's just stated in mathematical formulas, since they all have the same "width". That's the curse of dimensionality. It gets to have a name because it is unintuitive, useful, and yet simple.
2. Let the complete probability space for one variable be represented by the unit interval (0, 1), and imagine drawing ten samples along that interval. Each sample would then have to represent 10% of the probability space (on average). Now consider a second variable defined on another, orthogonal, (0, 1) interval, also represented by ten samples. We have produced 10 (x1, x2) points on a plane defined by the orthogonal x1, x2 axes, representing a new probability space. But the new space has 10 × 10 = 100 area units, so each of the ten points now represents only 1% of the probability space. It would require 100 points for each point to represent the same 10% of the probability space that was represented by 10 points in one dimension. This is illustrated on the slide, where the 10 points are not drawn at random, to show their diminished coverage. The number of points would need to increase exponentially to maintain a given accuracy.

Now, although there are 100 (x1, x2) points, neither axis has more accuracy than was provided by the previous 10, because 10 values of x1 are required for each of the 10 different values of x2 needed to represent the new probability plane. Because in practice the samples are taken at random, it appears as though there are more samples, but this isn't the case in terms of probability-space coverage, which is related to the density of points, not the number of observations per axis.

Next, consider adding another dimension, creating a probability space represented by a cube with ten units on a side and 1000 probability units within. It now requires stacking 10 of the previous (x1, x2) planes to construct the new (x1, x2, x3) cube, and of course 10 times the number of samples per axis to maintain the former level of accuracy, or probability coverage. It is obvious that 10^n samples would be required for an n-dimensional problem.