I presented this talk at DockerCon. It was all about AI + Docker + IoT: it showcased how a Dockerized app talks to sensors, GPUs and a camera module, and demonstrated how sensor data can be visualized on a Grafana dashboard - all running on an IoT Edge device.
2. - Docker Captain
- ARM Innovator
- Author @ collabnix.com
- Docker Community Leader
- DevRel at Redis Labs
- Worked at Dell, VMware & CGI
About Me
Ajeet Singh Raina
3. - The Rise of Docker for AI
- Autonomous Robotic Platform
- Docker on IoT Edge
- IoT Edge Sensor Analytics
- Real-time video analytics
- Real-time Crowd Mask detection
Agenda
4.
5. Around 94% of AI adopters are using or plan to
use containers within one year.
Source: 451 Research
6. A Food Delivery Robot
- An autonomous robot system
- Camera
- Sensors
- GPS
- NVIDIA Jetson TX2
8. But how do I build apps for such a
robotic platform at a faster pace?
9.
10.
11. Docker on NVIDIA Jetson Nano
Build on Open Source
● Install the latest version of Docker
curl https://get.docker.com | sh \
  && sudo systemctl --now enable docker
● Set up the NVIDIA Container Toolkit repository
distribution=$(. /etc/os-release; echo $ID$VERSION_ID) \
  && curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add - \
  && curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list \
  | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
● Install nvidia-docker2 package
$ sudo apt-get update
$ sudo apt-get install -y nvidia-docker2
$ sudo systemctl restart docker
● Running Ubuntu ARM container
docker run -it arm64v8/ubuntu /bin/bash
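The `arm64v8/ubuntu` image only runs natively on 64-bit ARM hosts such as the Jetson boards. A minimal sketch (image names as published on Docker Hub) that picks a matching image from `uname -m`:

```shell
#!/bin/sh
# Select the Ubuntu image that matches the host CPU architecture.
# arm64v8/ubuntu runs natively only on 64-bit ARM boards like the Jetson Nano.
ARCH=$(uname -m)
case "$ARCH" in
  aarch64) IMAGE="arm64v8/ubuntu" ;;  # Jetson Nano / TX2 and other arm64 hosts
  *)       IMAGE="ubuntu" ;;          # x86_64 laptops, CI runners, etc.
esac
echo "docker run -it ${IMAGE} /bin/bash"
```

On a Jetson board this prints the same command as the slide; elsewhere it falls back to the multi-arch `ubuntu` image.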
12. Docker access to NVIDIA GPU
Build on Open Source
● Pre-requisite
$ sudo apt-get install -y nvidia-container-runtime
● Expose GPU for use
$ docker run -it --rm --gpus all ubuntu nvidia-smi
● Specify the GPUs
$ docker run -it --rm \
  --gpus device=GPU-3a23c669-1f69-c64e-cf85-44e9b07e7a2a \
  ubuntu nvidia-smi
● Set NVIDIA capabilities
$ docker run --rm --gpus 'all,capabilities=utility' \
  ubuntu nvidia-smi
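When more than one device is selected, the `--gpus` value contains a comma and must be quoted as a single argument so the shell does not split it. A sketch that assembles such a command (the indices `0,1` are placeholders; list real devices with `nvidia-smi -L`):

```shell
#!/bin/sh
# Selecting several GPUs by index: the value contains a comma, so it must be
# passed to Docker as one quoted argument, i.e. --gpus '"device=0,1"'.
DEVICES="0,1"   # placeholder indices; substitute the output of: nvidia-smi -L
CMD="docker run -it --rm --gpus '\"device=${DEVICES}\"' ubuntu nvidia-smi"
echo "$CMD"
```

The inner escaped double quotes keep `device=0,1` together when Docker parses the `--gpus` value.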
13. Enabling GPU access with Compose
Build on Open Source
● Compose services can define GPU device reservations
services:
test:
image: nvidia/cuda:10.2-base
command: nvidia-smi
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: 1
capabilities: [gpu, utility]
● Bring up the service
$ docker-compose up
Creating network "gpu_default" with the default driver
Creating gpu_test_1 ... done
Attaching to gpu_test_1
test_1 | +-----------------------------------------------------------------------------+
test_1 | | NVIDIA-SMI 450.80.02    Driver Version: 450.80.02    CUDA Version: 11.1     |
test_1 | |-------------------------------+----------------------+----------------------+
test_1 | |===============================+======================+======================|
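To hand every GPU on the host to a service instead of a fixed number, the Compose device reservation also accepts `count: all` - a sketch along the same lines as the service above:

```yaml
services:
  test:
    image: nvidia/cuda:10.2-base
    command: nvidia-smi
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all            # reserve every GPU on the host
              capabilities: [gpu]
```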