2. WHAT IS DEEP LEARNING?
First conceived in the 1950s, deep learning is a
subset of machine learning algorithms that learns
by exposing a large, many-layered collection of
connected processing units to a vast set of
examples.
One of its primary attributes is the ability to
identify patterns in unstructured data.
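A minimal sketch of "learning by exposure to examples" (an illustrative linear model, not anything from the slides): the model's weights are nudged a little after each pass over the data until its predictions match the examples.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))      # 100 examples, 3 features each
true_w = np.array([1.0, -2.0, 0.5])    # the pattern hidden in the data
y = X @ true_w                         # targets the model must fit

w = np.zeros(3)                        # model starts knowing nothing
for _ in range(200):                   # repeatedly expose it to the examples
    grad = X.T @ (X @ w - y) / len(X)  # mean-squared-error gradient
    w -= 0.1 * grad                    # small corrective step

# after training, w closely approximates true_w
```

Real deep learning applies the same update loop, but to millions of weights spread across many layers.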
3. WHY A SUDDEN RESURGENCE?
Advanced algorithms are developing as a result of
rapid improvements in:
• Fast, high-capacity information storage
• High computing power
• Parallelization
4. USE CASES & APPLICATIONS
Use Cases: Computer vision, voice recognition, and
natural language processing (NLP).
Business Applications: Text-based searches, fraud
detection, handwriting recognition, image search, and
translation.
Problems lending themselves to DL include medical
diagnosis, demand prediction, malware detection, self-
driving cars, customer churn, and failure prediction.
6. SHORTCOMING
Can be expensive and tricky to set up: requirement of a
large amount of data to train neural networks.
Still a very immature market, and most organizations
lack the necessary data science skills for even simple
machine learning solutions.
Not clear upfront whether deep learning will solve a given
problem at all: there is simply no mathematical theory
available that indicates whether a "good enough" deep
learning model exists for it.
8. ML VS DL
DL model: Able to learn useful representations on its own.
ML model: Needs to be told how to make an accurate
prediction (by feeding it hand-chosen inputs and more data).
Conceptually, DL is like ML, but it differs in that it can
work directly on digital representations of the data.
DL can therefore potentially limit the human biases that go
into choosing inputs, and may also find more meaningful
measures than the inputs ML relies on.
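The contrast above can be made concrete (an assumed toy example, not from the slides): a classic ML model consumes features a human chose, while a DL model consumes the raw digital representation and derives its own features.

```python
raw_image = [[0.0, 0.9], [0.8, 0.1]]   # raw pixel values (the digital data)

# Classic ML: a human decides in advance which measurements matter.
hand_features = {
    "mean_brightness": sum(sum(row) for row in raw_image) / 4,
    "top_half_sum": sum(raw_image[0]),
}

# DL: the raw pixels themselves are the input; the layers learn the features.
dl_input = [px for row in raw_image for px in row]   # just the flattened data
```

Whatever bias went into picking "mean brightness" as a feature never enters the DL pipeline, which is the point made above.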
9. ALGORITHMS
Deep neural networks (DNNs): The dominant deep
learning algorithms, which are neural networks
constructed from many layers ("deep") of alternating
linear & nonlinear processing units
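The "alternating linear & nonlinear processing units" structure can be sketched in a few lines (illustrative layer sizes and random weights; not code from any particular library):

```python
import numpy as np

rng = np.random.default_rng(1)

def dnn(x, weights):
    """Apply alternating linear and nonlinear units, layer by layer."""
    for W in weights[:-1]:
        x = W @ x                  # linear processing unit
        x = np.maximum(x, 0.0)     # nonlinear processing unit (ReLU)
    return weights[-1] @ x         # final linear read-out

# Three layers make the network "deep": 4 inputs -> 16 -> 16 -> 2 outputs.
weights = [rng.standard_normal((16, 4)),
           rng.standard_normal((16, 16)),
           rng.standard_normal((2, 16))]
out = dnn(rng.standard_normal(4), weights)
```

Without the nonlinear units, the stack would collapse into a single linear map; alternating the two is what gives depth its expressive power.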
Random Decision Forests (RDFs): Also constructed
from many layers, but instead of neurons an RDF is
built from decision trees and outputs a statistical
average of the individual trees' predictions.
11. NEURAL NETS
First conceived in the
1950s, although many of
the key algorithmic
advances occurred in the
1980s and 1990s.
12. BOLTZMANN MACHINE
Terry Sejnowski (with
Geoffrey Hinton) developed
the basic algorithm, called a
Boltzmann machine, in the
early 1980s: a network of
symmetrically connected,
neuron-like units that make
stochastic decisions about
whether to be on or off.
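One update step of such a network can be sketched as follows (the standard Gibbs-sampling formulation, not Sejnowski's original code): each unit turns on with a probability given by the logistic function of its total input from the symmetrically connected units.

```python
import numpy as np

rng = np.random.default_rng(3)

n = 5
W = rng.standard_normal((n, n))
W = (W + W.T) / 2                # symmetric connections: W[i, j] == W[j, i]
np.fill_diagonal(W, 0.0)         # no self-connections
state = rng.integers(0, 2, size=n).astype(float)   # units are on (1) or off (0)

def gibbs_step(state):
    """Let each unit make a stochastic on/off decision in turn."""
    for i in range(n):
        total_input = W[i] @ state
        p_on = 1.0 / (1.0 + np.exp(-total_input))        # logistic of input
        state[i] = 1.0 if rng.random() < p_on else 0.0   # stochastic decision
    return state

state = gibbs_step(state)
```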
13. TERM “DEEP LEARNING”
Started gaining acceptance after a
publication by U. of Toronto
professor Geoffrey Hinton & grad
student Ruslan Salakhutdinov.
In 2006 they showed that neural
nets could be effectively pre-
trained one layer at a time,
which accelerated the subsequent
supervised learning used to fine-
tune the network.
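Greedy layer-wise pretraining in the spirit described above can be sketched as follows (tiny SVD-based linear "autoencoders" stand in for Hinton & Salakhutdinov's restricted Boltzmann machines): each layer learns to compress its own input before the next layer is stacked on top.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal((200, 16))   # unlabeled training examples

def pretrain_layer(H, width):
    """Learn one layer's weights from its input alone: keep the directions
    that best reconstruct H (an RBM stand-in via truncated SVD)."""
    _, _, Vt = np.linalg.svd(H, full_matrices=False)
    return Vt[:width]                # (width, input_dim) encoder weights

H = X
encoders = []
for width in (8, 4):                 # pretrain one layer at a time
    W = pretrain_layer(H, width)
    encoders.append(W)
    H = H @ W.T                      # the next layer sees these codes

# `encoders` would then initialize a deep network, and supervised
# fine-tuning would adjust all the layers jointly.
```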
16. HARDWARE
NVIDIA: Kepler GPUs powering Microsoft & Amazon's cloud,
Jetson TK-x & DGX-1
Microsoft, July 2017: Chip created for HoloLens that
includes a module custom-designed to efficiently run deep
learning software
Google, May 2016: Using its own tailor-made chips called
Tensor Processing Units (TPUs)
FPGAs (field-programmable gate arrays): Ability to
provide a higher performance per watt of power
consumption vs GPUs
17. BIG DEMAND
According to Microsoft CVP Peter Lee, there’s a “bloody war
for talent in this space.”
Given their size, Google, Facebook, Microsoft, and NVIDIA
can afford to hire the most accomplished deep learning
talent and pay them handsomely.
Deep learning represented almost half of all enterprise AI
revenue in 2016, according to Tractica.