Deep learning with C++ - an introduction to tiny-dnn
1. deep learning with c++
an introduction to tiny-dnn
by Taiga Nomi
embedded software engineer, Osaka, Japan
2. deep learning
Icons made by Freepik from www.flaticon.com, licensed under CC BY 3.0
Facial recognition
Image understanding
Finance
Game playing
Translation
Robotics
Drug discovery
Text recognition
Video processing
Text generation
3. Deep learning
- Learning a complicated function from a large amount of data
- Composed of simple, trainable mathematical functions
Input (text, audio, image, video, ...) → Trainable Building Blocks → Output (text, audio, image, video, ...)
8. Google Summer of Code projects:
- “A Modern Deep Learning module” by Edgar Riba
- “Deep Learning with Quantization for Semantic Saliency Detection” by Yida Wang
https://summerofcode.withgoogle.com/archive/
10. 1. Easy to introduce
- tiny-dnn is header-only - no installation
- tiny-dnn is dependency-free - no prerequisites
- Just put the following line into your .cpp file:
#include <tiny_dnn/tiny_dnn.h>
11. 1.Easy to introduce
- You can bring deep learning to any target for which you have a C++ compiler
- Officially supported (by CI builds):
- Windows (MSVC 2013 32/64-bit, MSVC 2015 32/64-bit)
- Linux (gcc 4.9, clang 3.5)
- OSX (LLVM 7.3)
- tiny-dnn may also run on other compilers that support C++11
12. 1.Easy to introduce
- A Caffe model converter is also available
- TensorFlow converter - coming soon!
- Closes the gap between researchers and engineers
26. 3.Extensible backends
Common scenario 1:
“We have a good GPU machine to train networks, but we need to deploy the trained model onto a mobile device.”
Common scenario 2:
“We need to write platform-specific code to get production-level performance... but it’s painful to understand the whole framework.”
27. 3.Extensible backends
Some performance-critical layers have a backend engine behind a common Layer API:
- backend::internal: pure C++ code
- backend::avx: AVX-optimized code
- backend::nnpack: x86/ARM (optional)
- backend::opencl: GPU (optional)
- ...
28. 3.Extensible backends
// select an engine explicitly
net << conv<>(28, 28, 5, 1, 32, backend::avx)
<< ...;
// switch them seamlessly
net[0]->set_backend_type(backend::opencl);
29. Basic functionality:
- Model serialization (binary/JSON)
- Regression training
- Basic image processing
- Layer freezing
- Graph visualization
- Multi-thread execution
- Double-precision support
30. Extra modules (require 3rd-party libraries):
- Caffe importer (requires protobuf)
- OpenMP support
- Intel TBB support
- NNPACK backend (the same engine as Caffe2)
- libdnn backend (the same engine as caffe-opencl)
32. - GPU integration
- GPU backend is still experimental
- cudnn backend
- More mobile-oriented
- iOS/Android examples
- Quantized operations to use less RAM
- TensorFlow Importer
- Performance profiling tools
- OpenVX support
We need your help!
33. For users
- User chat for Q&A: https://gitter.im/tiny-dnn
- Official documentation: http://tiny-dnn.readthedocs.io/en/latest/
34. For developers
- Join our developer chat: https://gitter.im/tiny-dnn/developers
- or check out the docs and our issues marked as “contributions welcome”:
https://github.com/tiny-dnn/tiny-dnn/blob/master/docs/developer_guides/How-to-contribute.md
https://github.com/tiny-dnn/tiny-dnn/labels/contributions%20welcome