In this demo-heavy session we will see how developers can combine Azure's custom Cognitive Services and IoT Edge technologies to productionise AI models at the edge on something as small as a Raspberry Pi. In the past, machine learning at the edge required powerful and expensive machines known as "heavy edge", which were limited by the need for continuous power supplies and direct connectivity to all sensors, making deployments constrained and expensive. By leveraging the computing power of Azure and its easy-to-use services, we will see how this is now within the reach of any developer.
The session will cover:
· Training Custom Cognitive AI in Azure
· Deployment options for your shiny new AI
· Using IoT Edge to deploy AI
· Rubbing a little DevOps on it
Image classification workflow:
· Prepare data
· Build & train: model definition & training, model evaluation
· Run: deploy the model (web service, Docker container or IoT Edge), then score the model
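If you export the model instead of calling the hosted endpoint, scoring it locally looks roughly like the sketch below. This assumes the model was exported as a TensorFlow frozen graph (model.pb plus labels.txt) and uses the input/output tensor names documented for Custom Vision exports at the time of writing ("Placeholder:0" and "model_outputs:0"); verify them against your own export.

```python
# Hypothetical local-scoring sketch for a Custom Vision model exported as a
# TensorFlow frozen graph (model.pb plus labels.txt). The tensor names
# "Placeholder:0" and "model_outputs:0" follow the Custom Vision export docs
# at the time of writing; verify them against your own export.
import numpy as np
import tensorflow as tf
from PIL import Image

graph_def = tf.compat.v1.GraphDef()
with open("model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

with open("labels.txt") as f:
    labels = [line.strip() for line in f]

graph = tf.Graph()
with graph.as_default():
    tf.compat.v1.import_graph_def(graph_def, name="")

# Custom Vision classifiers typically expect a 224x224 BGR image; exact
# preprocessing can differ between export versions.
img = Image.open("test.jpg").resize((224, 224))
frame = np.array(img)[:, :, (2, 1, 0)]  # RGB -> BGR

with tf.compat.v1.Session(graph=graph) as sess:
    outputs = sess.run("model_outputs:0", {"Placeholder:0": [frame]})
    for label, prob in zip(labels, outputs[0]):
        print(f"{label}: {prob:.3f}")
```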
Computer vision and classification
[Slide: the "snow leopard?" pipeline: an image passes through a deep neural network acting as an image featurizer; the resulting image features feed a Spark ML classifier (a decision tree or logistic regression), which outputs the predicted class.]
Getting started with Custom Vision Service
This lab will show you how to bring advanced ML vision capabilities to your applications with the Custom Vision
Service. The service makes it easy to build custom image classifiers and provides APIs and tools to help you
improve your classifier over time.
https://github.com/Microsoft/ai-school-custom-vision-service-intro
Custom Vision + Azure IoT Edge on a Raspberry Pi 3
This is a sample showing how to deploy a Custom Vision model to a Raspberry Pi 3 device running Azure IoT
Edge. Custom Vision is an image classifier that is trained in the cloud with your own images. IoT Edge lets
you run this model next to your cameras, where the video data is generated.
https://github.com/Azure-Samples/Custom-vision-service-iot-edge-raspberry-pi
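For a feel of what the module side looks like, here is a minimal sketch of a Python IoT Edge module, assuming the azure-iot-device SDK (v2); the sample repo contains the full, working module.

```python
# Minimal sketch of a Python IoT Edge module, assuming the azure-iot-device
# SDK (v2). A real classifier module would run inference on each incoming
# camera frame before forwarding the result; the JSON payload here is a
# placeholder.
from azure.iot.device import IoTHubModuleClient, Message

# Reads its connection settings from the IoT Edge runtime environment.
client = IoTHubModuleClient.create_from_edge_environment()
client.connect()

# Send a classification result to the module's "output1" route, which the
# deployment manifest wires to the next module or to IoT Hub.
client.send_message_to_output(
    Message('{"label": "snow leopard", "probability": 0.97}'), "output1")

client.shutdown()
```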
Drone Rescue
This sample demonstrates how to bring AI to edge devices: AirSim generates synthetic training data for a
Custom Vision model, which is then deployed to edge devices.
https://github.com/Microsoft/DroneRescue
Before deep learning, feature engineering was critical, as classic ML algorithms could not learn useful features by themselves. The way data was presented to the algorithm was crucial to its success.
Before deep learning (and CNNs) became popular for image-classification problems such as classifying handwritten digits (the MNIST dataset), solutions were often based on hard-coded features: a histogram of pixel values, the height of each digit, the number of loops (0, 8, 9 and 6 have loops; 7 doesn't), and so on.
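To make that concrete, here is a small sketch of that style of hand-crafted feature extraction for a 28x28 MNIST-style digit; the specific features (intensity histogram, digit height, quadrant ink density) are illustrative choices, not a canonical recipe.

```python
# A taste of pre-deep-learning feature engineering for a 28x28 grayscale
# digit (MNIST-style). The feature choices are illustrative.
import numpy as np

def handcrafted_features(img: np.ndarray) -> np.ndarray:
    """img: 28x28 array with pixel values in [0, 255]."""
    ink = img > 128  # binarise

    # Coarse histogram of pixel intensities.
    hist, _ = np.histogram(img, bins=8, range=(0, 255))

    # Height of the digit: span of rows containing any ink.
    rows = np.where(ink.any(axis=1))[0]
    height = (rows[-1] - rows[0] + 1) if rows.size else 0

    # Ink density per quadrant, a crude shape descriptor. (Counting loops,
    # as in 0/6/8/9 vs 7, would need connected-component analysis.)
    h, w = img.shape
    quads = [ink[:h//2, :w//2], ink[:h//2, w//2:],
             ink[h//2:, :w//2], ink[h//2:, w//2:]]

    return np.concatenate([hist / max(hist.sum(), 1),
                           [height / h],
                           [q.mean() for q in quads]])
```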
Deep learning removes the need for most feature engineering, as neural networks are capable of automatically extracting useful features from raw data.
But two reminders here for you:
· Good features let you solve a problem with fewer resources ("reading a clock face" with a CNN is possible, but computationally inefficient).
· Automatic learning of features requires a lot more data samples to learn from, hence DL models often rely on having lots of training data available.
Before we talk about neural networks, let's define what a neuron is.
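In code, a neuron is just a weighted sum of its inputs plus a bias, passed through a non-linear activation; a minimal sketch:

```python
# A single artificial neuron: a weighted sum of the inputs plus a bias,
# passed through a non-linear activation (here, a sigmoid).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neuron(x, w, b):
    """x: inputs, w: one weight per input, b: bias."""
    return sigmoid(np.dot(w, x) + b)

x = np.array([0.5, -1.0, 2.0])   # three inputs
w = np.array([0.8, 0.2, -0.5])   # learned weights
print(neuron(x, w, b=0.1))       # an activation between 0 and 1
```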
Think of the bottom boxes as layers of a neural network: data comes in, is processed, and is handed off to the next node, and so on, analysing every pixel. The neural network is the image featurizer; its job is the first step, understanding what you are looking at. A decision tree or logistic regression is then applied to the image features to determine the output, the second step: "yes" or "no", it is a snow leopard (see the sketch below).
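A minimal sketch of this two-stage pipeline, assuming TensorFlow/Keras for the pretrained featurizer and scikit-learn for the classifier; the image file names and labels are placeholders.

```python
# Sketch of the two-stage pipeline from the slide: a pretrained deep network
# as the image featurizer, with a simple logistic regression on top.
# Assumes TensorFlow/Keras and scikit-learn; file names are placeholders.
import numpy as np
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.applications.resnet50 import preprocess_input
from tensorflow.keras.preprocessing import image
from sklearn.linear_model import LogisticRegression

# A pretrained CNN with its classification head removed is the featurizer.
featurizer = ResNet50(weights="imagenet", include_top=False, pooling="avg")

def features(path):
    img = image.load_img(path, target_size=(224, 224))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    return featurizer.predict(x)[0]  # a 2048-dimensional feature vector

# Train the simple "yes/no" classifier on features, not raw pixels.
X = np.stack([features(p) for p in
              ["leopard1.jpg", "leopard2.jpg", "rock1.jpg", "rock2.jpg"]])
y = np.array([1, 1, 0, 0])  # 1 = snow leopard, 0 = not
clf = LogisticRegression().fit(X, y)

print(clf.predict([features("unknown.jpg")]))
```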
https://customvision.ai/
· Provides UI & API access
· Load, train, version & test models
· Publish
· Export
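Once an iteration is published, any application can score images over REST. A minimal sketch assuming the v3.0 prediction API; the endpoint, project ID, iteration name and key are placeholders from your own Custom Vision resource.

```python
# Scoring an image against a published Custom Vision iteration over REST.
# Assumes the v3.0 prediction API; endpoint, project ID, iteration name and
# key are placeholders from your own Custom Vision resource.
import requests

ENDPOINT = "https://<region>.api.cognitive.microsoft.com"
PROJECT_ID = "<project-guid>"
ITERATION = "<published-iteration-name>"
PREDICTION_KEY = "<prediction-key>"

url = (f"{ENDPOINT}/customvision/v3.0/Prediction/{PROJECT_ID}"
       f"/classify/iterations/{ITERATION}/image")

with open("test.jpg", "rb") as f:
    resp = requests.post(
        url,
        headers={"Prediction-Key": PREDICTION_KEY,
                 "Content-Type": "application/octet-stream"},
        data=f,
    )

for pred in resp.json()["predictions"]:
    print(pred["tagName"], pred["probability"])
```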
· A valid Microsoft account or Azure Active Directory OrgID
· A series of images to train the classifier
· A few images to test the classifier after training
Note: the lab will only use 10 images per classifier to minimize lab time
AirSim is a simulator for drones, cars and more, built on Unreal Engine. It is open-source and cross-platform, and supports hardware-in-the-loop with popular flight controllers such as PX4 for physically and visually realistic simulations. It is developed as an Unreal plugin that can simply be dropped into any Unreal environment you want.
Our goal is to develop AirSim as a platform for AI research to experiment with deep learning, computer vision and reinforcement learning algorithms for autonomous vehicles. For this purpose, AirSim also exposes APIs to retrieve data and control vehicles in a platform-independent way.
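To give a flavour of those APIs, a minimal sketch that connects to a running simulation and grabs a camera frame as training data, assuming the airsim Python package; camera "0" is the default front-facing camera.

```python
# A flavour of the AirSim Python API: connect to a running simulation and
# grab a camera frame for training data. Assumes the `airsim` pip package;
# camera "0" is the default front-facing camera.
import airsim

client = airsim.MultirotorClient()
client.confirmConnection()
client.enableApiControl(True)

client.takeoffAsync().join()

# Retrieve a scene image from the front camera as compressed PNG bytes.
png_bytes = client.simGetImage("0", airsim.ImageType.Scene)
with open("frame.png", "wb") as f:
    f.write(png_bytes)

client.landAsync().join()
```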