The document summarizes several demos of hardware-accelerated machine learning inference:
- A heavy edge demo showing hardware-accelerated inference on the edge using a Minnow Board.
- A drone app creation demo.
- A Vision AI developer kit demo.
AI + IoT: building, deploying, and managing your custom AI on the edge
3. Demos
• Heavy Edge Demo – HW-accelerated inferencing on the edge
• Minnow Board Demo
• Drone app creation Demo
• Vision AI developer kit demo
4. Microsoft ML & AI products – choosing a path
Questions to ask:
• Build your own or consume pre-trained models?
• Which experience do you want?
• What is the deployment target?
• What engine(s) do you want to use?

Build your own:
• Code first (on-prem): ML Server on-premises, Hadoop, SQL Server
• Code first (cloud): AML services (Preview) – SQL Server, Spark, Hadoop, Azure Batch, DSVM, Azure Container Service
• Visual tooling (cloud): AML Studio

Consume:
• Cognitive services, bots
• IoT / Edge: AML
• Mobile: AML
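The decision tree above can be sketched as a small helper function. This is an illustrative sketch only; the function name and its question-to-product mapping are assumptions drawn from the slide, not a Microsoft API:

```python
def recommend(build_your_own: bool, code_first: bool = True,
              cloud: bool = True, target: str = "cloud") -> str:
    """Map the slide's questions to a product suggestion (illustrative only)."""
    if not build_your_own:
        # Consume route: Cognitive Services and bots in the cloud;
        # Azure Machine Learning (AML) for IoT/Edge and mobile targets.
        return "AML" if target in ("iot", "edge", "mobile") else "Cognitive services, bots"
    if not code_first:
        return "AML Studio"            # visual tooling, cloud
    if not cloud:
        return "ML Server (on-prem)"   # alongside Hadoop, SQL Server
    return "AML services (Preview)"    # SQL Server, Spark, Azure Batch, DSVM, ACS
```

For example, `recommend(False, target="mobile")` follows the consume branch to `"AML"`, while `recommend(True, cloud=False)` lands on the on-premises code-first option.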
5. AI development lifecycle
Stages:
• Data Sets & Catalog
• Data Exploration & Preparation
• Model Development & Training
• Model Package & Deploy
• App Development & Model Inference
• App Flighting & Analytics

Supporting activities: Data Collection, Data Curation, Data Load
Roles across the lifecycle: Data Engineer, Data Scientist, AI App Dev, Dev Ops, Model Manager
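The lifecycle above can be viewed as an ordered pipeline of stages, each with an owning role. A minimal sketch follows; the stage-to-role mapping is an assumption for illustration, since the slide lists stages and roles without pairing them:

```python
from collections import OrderedDict
from typing import Optional

# Stages from the slide, mapped to plausible owning roles (assumed pairing).
LIFECYCLE = OrderedDict([
    ("Data Sets & Catalog",               "Data Engineer"),
    ("Data Exploration & Preparation",    "Data Scientist"),
    ("Model Development & Training",      "Data Scientist"),
    ("Model Package & Deploy",            "Model Manager"),
    ("App Development & Model Inference", "AI App Dev"),
    ("App Flighting & Analytics",         "Dev Ops"),
])

def next_stage(current: str) -> Optional[str]:
    """Return the stage that follows `current`, or None after the last stage."""
    stages = list(LIFECYCLE)
    i = stages.index(current)
    return stages[i + 1] if i + 1 < len(stages) else None
```

Modeling the stages as an ordered mapping makes the hand-off order explicit: packaging and deployment sit between training and app-level inference.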
6. Goals
• Ease of Discovery: find the right model for your needs
• Ease of Integration: use the tools you are familiar with to create and manage AI apps
• Ease of Deployment: deploy the model easily
• Reach of Deployment: deploy the model anywhere
• Single pane of glass for monitoring and management: all your company's models in one place
7. Deployment targets

Cloud: Azure
• Description: an Azure host that spans from CPU to GPU and FPGA VMs
• What runs the model: CPU, GPU, or Arria 10 FPGA
• Model package: native to Windows, container elsewhere

Heavy Edge
• Description: a server with slots to insert CPUs, GPUs, and FPGAs, or an x64 or ARM system that needs to be plugged in to work
• What runs the model: Arria 10 FPGA, Nvidia GPU, x64 CPU, or ARM CPU
• Model package: Windows native / Windows ML, or Linux container

Light Edge
• Description: a sensor with a SoC (ARM CPU, DSPs) and memory that can operate on batteries
• What runs the model: hardware-accelerated DSP, CPU, or GPU
• Model package: (ideally) a container; Android native, iOS native, or an RTOS
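The tier breakdown above amounts to a simple classification rule over device traits. Here is a hedged sketch of that rule; the `Device` fields and function name are hypothetical, chosen to mirror the distinguishing traits named on the slide (battery operation for Light Edge, an Azure host for Cloud):

```python
from dataclasses import dataclass

@dataclass
class Device:
    battery_powered: bool  # Light Edge devices can operate on batteries
    is_azure_vm: bool      # the Cloud tier is an Azure host (CPU/GPU/FPGA VMs)

def deployment_tier(d: Device) -> str:
    """Pick a deployment tier per the slide's table (illustrative, not a product API)."""
    if d.is_azure_vm:
        return "Cloud: Azure"   # CPU, GPU, or Arria 10 FPGA VMs
    if d.battery_powered:
        return "Light Edge"     # SoC with DSPs; container or native package
    return "Heavy Edge"         # plugged-in server; Windows ML or Linux container
```

A battery-powered sensor classifies as Light Edge, while a plugged-in x64 or ARM server outside Azure falls into Heavy Edge.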
18. Vision AI developer kit plan
• Certify support for the target platform
  – Qualcomm: certify the QCS 605/603 board (Linux), a board targeted at IoT devices
• Incorporate it as a first-class deployment target
  – Qualcomm SNPE quantized models
• Use the same development pipeline that we use for all IoT devices
• Enable an ecosystem of devices that use the board