2. Full Day of Applied AI
Morning
Session 1 Intro to Artificial Intelligence
09:00-09:45 Introduction to Applied AI
09:45-10:00 Coffee and break
Session 2 Live Coding a machine learning app
10:00-10:10 Getting your machine ready for machine learning
10:10-10:20 Training and evaluating the model
10:20-10:50 Improving the model
10:50-11:00 Coffee and break
Session 3 Machine learning in the wild - deployment
11:00-11:15 Coding exercise continued
11:15-11:45 Serving your own machine learning model | Code
11:45-11:55 How to solve problems | interactive exercise
11:55-12:00 Q and A
Lunch
12:00-13:00 Lunch
Afternoon
Session 4 Hello World Deep Learning (MNIST)
13:00-13:15 Deep Learning intro
13:00-13:15 Image recognition and CNNs | Talk
13:15-13:45 Building your own convolutional neural network | Code
13:45-14:00 Coffee and break
Session 5 Natural Language Processing
14:00-14:30 Natural language processing | Talk
14:30-14:45 Working with language | Code
14:45-15:00 Coffee and break
Session 6 Conversational interfaces and Time Series
15:00-15:20 Conversational interfaces
15:20-15:45 Time Series prediction
15:45-16:00 Coffee and break
Session 7 Generative models and style transfer
16:00-16:30 Generative models | Talk
16:30-16:45 Trying out GANs and style transfer | Code
16:45-17:00 Coffee and break
Anton Osika AI Research Engineer Sana Labs AB
anton.osika@gmail.com
Birger Moëll Machine Learning Engineer
birger.moell@gmail.com
4. Deep learning for creativity
Generative models give
computers the ability to
create new forms of data.
5. Generative Adversarial Networks
“Generative Adversarial Networks is the
most interesting idea in the last ten years
in machine learning.”
- Yann LeCun, Director of AI Research at Facebook
6. Generator and discriminator networks
GANs, originally proposed by Ian Goodfellow,
consist of two networks: a generator and a discriminator.
Both are trained at the same time and compete
against each other in a minimax game. The generator is
trained to fool the discriminator by creating realistic
images, and the discriminator is trained not to be
fooled by the generator.
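As a reference, the minimax game described above can be written as the value function from the original GAN paper, where the discriminator D maximizes and the generator G minimizes:

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]
```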
7. Training a generative model
Generator and discriminator
These two networks are locked in a battle: the
discriminator tries to distinguish real images from fake images,
and the generator tries to create images that make the
discriminator think they are real. In the end, the generator
outputs images that the discriminator cannot distinguish
from real images.
GAN learning to generate images (linear time)
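As a minimal sketch of this training loop (illustrative only, not code from the talk), here is a toy GAN in plain NumPy: a two-parameter generator learns to map Gaussian noise onto a 1-D data distribution, while a logistic-regression discriminator tries to tell real samples from generated ones. All hyperparameters are assumptions chosen for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# "Real" data: samples from N(4, 1.25)
def real_batch(n):
    return rng.normal(4.0, 1.25, n)

# Generator G(z) = a*z + b maps N(0,1) noise toward the data distribution
a, b = 1.0, 0.0
# Discriminator D(x) = sigmoid(w*x + c) is a logistic-regression classifier
w, c = 0.0, 0.0

lr, batch = 0.05, 64
for step in range(2000):
    # --- discriminator update: push D(real) -> 1 and D(fake) -> 0 ---
    x = real_batch(batch)
    z = rng.normal(size=batch)
    g = a * z + b
    d_real = sigmoid(w * x + c)
    d_fake = sigmoid(w * g + c)
    gr = d_real - 1.0   # cross-entropy gradient w.r.t. logit, real labeled 1
    gf = d_fake         # cross-entropy gradient w.r.t. logit, fake labeled 0
    w -= lr * np.mean(gr * x + gf * g)
    c -= lr * np.mean(gr + gf)

    # --- generator update: non-saturating loss -log D(G(z)) ---
    z = rng.normal(size=batch)
    g = a * z + b
    d_fake = sigmoid(w * g + c)
    gl = (d_fake - 1.0) * w  # gradient of the loss w.r.t. G(z)
    a -= lr * np.mean(gl * z)
    b -= lr * np.mean(gl)

samples = a * rng.normal(size=1000) + b
print(float(samples.mean()))  # the generator's mean drifts toward the real mean of 4
```

The alternating updates mirror the slide's description: the discriminator step makes D better at telling real from fake, and the generator step moves G's parameters so its samples look more real to the current D.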
8. Generative adversarial networks and neuroscience
The default mode network (responsible for your
ideas of self, others, future and past) is
anticorrelated with the task positive network
(responsible for actions).
The default mode network is what activates when
you get bored or aren’t performing a task.
The default mode network's input is your actions in
the world (the task positive network), which
influence the output of its computations. These
computations (your thoughts) then govern actions
and become the input for the task positive
network.
A feedback loop of a generator (task positive
network) and discriminator (default mode
network) is a fairly accurate description of how
your brain functions.
9. Generative adversarial networks in practice
Generative adversarial networks have inspired impressive real-world applications.
Fig. 1. Deep neural network for inferring facial animation from speech. The network takes approximately half a second of audio as input,
and outputs the 3D vertex positions of a fixed-topology mesh that correspond to the center of the audio window. The network also takes
a secondary input that describes the emotional state. Emotional states are learned from the training data without any form of pre-labeling.
Nvidia 2017, https://blogs.nvidia.com/blog/2017/07/31/nvidia-research-brings-ai-to-computer-graphics/
11. Neural style transfer
Neural style transfer is based on the 2015
paper "A Neural Algorithm of Artistic Style" by
Gatys et al.: https://arxiv.org/abs/1508.06576
Abstract: In fine art, especially painting,
humans have mastered the skill to create
unique visual experiences through
composing a complex interplay between
the content and style of an image. Here we
introduce an artificial system based on a
Deep Neural Network that creates artistic
images of high perceptual quality.
We determine the content and the style of
images (content/style extractor) and then
merge content from one image with style
from another image.
12. Neural style transfer
Neural style transfer is a
technique that synthesizes a
third image that has the
semantic content of the first
image and the texture/style of
the second.
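The content/style split above can be sketched with two losses: a content loss that compares feature maps directly, and a style loss that compares Gram matrices of feature maps. This is an illustrative sketch, not the authors' code; the random arrays below stand in for features a pretrained CNN such as VGG would produce.

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (channels, height, width) feature map.

    Entry (i, j) is the correlation between channels i and j,
    which captures texture/style while discarding spatial layout.
    """
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def content_loss(content_feats, generated_feats):
    # MSE between feature maps preserves the semantic content
    return float(np.mean((content_feats - generated_feats) ** 2))

def style_loss(style_feats, generated_feats):
    # MSE between Gram matrices matches texture statistics
    g_style = gram_matrix(style_feats)
    g_gen = gram_matrix(generated_feats)
    return float(np.mean((g_style - g_gen) ** 2))

rng = np.random.default_rng(0)
content = rng.normal(size=(8, 16, 16))    # stand-in features of the content image
style = rng.normal(size=(8, 16, 16))      # stand-in features of the style image
generated = rng.normal(size=(8, 16, 16))  # features of the image being synthesized

# The synthesized image is optimized to minimize this weighted sum;
# alpha and beta are illustrative weights, not values from the paper.
alpha, beta = 1.0, 1000.0
total = alpha * content_loss(content, generated) + beta * style_loss(style, generated)
```

In the actual method, `generated` starts as noise (or the content image) and is updated by gradient descent on `total`, so its content drifts toward one image and its texture toward the other.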
14. Adaptive style transfer
Next level style transfer!
Instead of applying style
transfer with a single style
image, we combine the style of
a large collection of images
by one artist.
https://github.com/CompVis/adaptive-style-transfer
22. Focus on engineering or research
Machine learning is a huge field that is growing quickly.
Decide what to focus on.
Applied machine learning, serving deep learning models in the real world, is an
area that will be increasingly important.
Research in machine learning is also worthwhile, since the most important
algorithms haven't been found yet.
23. Thank you for your time.
Good luck exploring deep learning.
Feel free to fill out our survey with any feedback
birger.moell@gmail.com
anton.osika@gmail.com
24. Recommended reading list
Applying Machine Learning
People + AI Guidebook - Google - build great products with ML
http://martin.zinkevich.org/rules_of_ml/rules_of_ml.pdf - best practices when productionizing ML
Machine Learning Yearning - Andrew Ng - applied ML research strategy
Cloud provider APIs (e.g. AWS)
Advanced modelling
Deep Learning for Coders - fast.ai course
The Hundred-Page Machine Learning Book - Andriy Burkov