1. Multispectral Transfer Network:
Unsupervised Depth Estimation for All-day Vision
AAAI 2018, New Orleans
Namil Kim*, Yukyung Choi*, Soonmin Hwang, In So Kweon
KAIST RCV Lab / All-day Vision Team
*Equal contributions
2. Problem definition
Why are we interested in depth?
“Crucial information” for understanding the world around us
*From NVidia
3D understanding is necessary for autonomous decision making
3. Problem definition
How do we usually get “dense depth”
at any time of day?
[Figure: sensor comparison] RGB stereo gives dense depth by day but is sensitive to illumination (fails at night); 3D LiDAR is illumination-robust but sparse (only 2–4 points on objects at roughly 11.45–23.89 m, at 0.16° angular resolution).
6. Idea for all-day depth estimation
[Figure: day vs. night] Under illumination change, RGB works in daytime (O) but fails at night (X), so unsupervised learning from RGB alone cannot cover all-day conditions.
7. Idea for all-day depth estimation
[Figure: day vs. night] Adding a thermal camera: RGB works by day (O) but fails at night (X), while thermal imagery is robust to illumination change, so unsupervised learning becomes feasible around the clock.
8. Idea for all-day depth estimation
[Figure: day vs. night] Two requirements follow: #1 alignment between RGB and thermal, and #2 unsupervised learning of thermal-to-depth.
9. Idea for all-day depth estimation
[Figure: day vs. night] Full picture: align RGB and thermal (#1) and learn thermal-to-depth without supervision (#2); since thermal is robust to illumination change, the day-trained model adapts to night.
10. Requirements #1
Multispectral (RGB-Thermal) dataset
RGB stereo pair
Alignment between thermal and RGB (left)
3D measurement
Yukyung Choi et al., KAIST Multispectral Recognition Dataset in Day and Night, TITS’18
11. Requirements #2
Multispectral (RGB-Thermal) Transfer Network
Aim: Thermal to depth prediction
Data: Thermal and aligned left RGB
(+ right RGB, stereo pair)
Model: unsupervised method
[Figure] Aligned RGB–thermal input → unsupervised learning (U.S.L.) → thermal-to-depth.
12. Proposed framework
What is Multispectral Transfer Network?
[Figure] Comparison of a supervised method, an unsupervised method, and the proposed MTN method.
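For context, unsupervised stereo methods of this kind are typically trained with a photometric reconstruction loss: warp one stereo view with the predicted disparity and compare it to the other. A minimal 1-D scanline sketch of that idea (the function names and the nearest-neighbour sampling are illustrative, not taken from the paper):

```python
def reconstruct_left(right, disparity):
    """Reconstruct the left scanline by sampling right[x - d(x)]
    (nearest-neighbour, clamped at the borders)."""
    recon = []
    for x, d in enumerate(disparity):
        src = min(max(int(round(x - d)), 0), len(right) - 1)
        recon.append(right[src])
    return recon

def photometric_loss(left, right, disparity):
    """Mean absolute photometric error between the left view and its
    reconstruction from the right view -- the unsupervised training signal."""
    recon = reconstruct_left(right, disparity)
    return sum(abs(a - b) for a, b in zip(left, recon)) / len(left)
```

With rectified stereo, a left pixel at x corresponds to the right pixel at x − d, so a correct disparity map drives this reconstruction error toward zero without any depth ground truth.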
14. Key Ideas of Proposed MTN (Overview)
1) Efficient Multi-task Learning
Without annotated data: propose an efficient multi-task methodology using depth and chromaticity.
Prior art (Predicting Depth, Surface Normals and Semantic Labels with a Common Multi-Scale Convolutional Architecture, ICCV 2015) relies on auxiliary tasks such as surface normals, semantic labeling, and object pose annotation.
*Most such work targets indoor scenes (it is difficult to collect labels for those auxiliary tasks outdoors).
What multi-task learning for depth estimation needs from an auxiliary task: no human-intensive data, relevance to depth, and contextual information.
15. Key Ideas of Proposed MTN (1/4)
1) Efficient Multi-task Learning
Without annotated data: propose an efficient multi-task methodology.
Previous works (e.g. Predicting Depth, Surface Normals and Semantic Labels with a Common Multi-Scale Convolutional Architecture, ICCV 2015) use surface normals, semantic labeling, or object pose annotation as auxiliary tasks; most stay indoors, since collecting such labels outdoors is difficult.
Our work uses chromaticity as the auxiliary task: no human-intensive data, relevance to depth, and contextual information.
16. Key Ideas of Proposed MTN (2/4)
2) Novel Module for Multi-task Learning
Interleaver Module: directly interleaves the chromaticity into the depth estimation.
“Skip connection meets Interleaver for feature learning”
[Figure: Multispectral Transfer Network (MTN) — an encoder–decoder maps a thermal input to a disparity output and a chromaticity output; legend: Conv., DeConv., Interleaver, Skip Connect., Forward flow]
17. Key Ideas of Proposed MTN (2/4)
2) Novel Module for Multi-task Learning
1. Global/un-pooling + L2 normalization: enlarge the receptive field [ParseNet] and transform the features.
2. Gating mechanism: control how strongly the auxiliary task affects the main task (especially in back-propagation).
3. Up-sampling and adding to the previous output.
The module is equipped in every skip-connected flow (full connections between layers).
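The three steps above can be sketched on a toy 1-D feature vector: pool and L2-normalize the auxiliary (chromaticity) feature, gate it, then broadcast it back and add it to the main-task skip feature. A minimal sketch under those assumptions — the real module operates on 2-D feature maps with convolutional gating, so every name and the scalar gate here are illustrative:

```python
import math

def interleave(main_feat, aux_feat, gate_weight=1.0, gate_bias=0.0):
    """Toy 1-D Interleaver: global pooling + L2 norm, gate, un-pool, add."""
    # 1) Global average pooling over the auxiliary (chromaticity) feature.
    pooled = sum(aux_feat) / len(aux_feat)
    # L2-normalize the pooled value (trivial for a scalar).
    norm = abs(pooled)
    normalized = pooled / norm if norm > 0 else 0.0
    # 2) Gating: a sigmoid controls how much the auxiliary task leaks into
    #    the main task (and, during training, into its gradients).
    gate = 1.0 / (1.0 + math.exp(-(gate_weight * pooled + gate_bias)))
    # 3) Un-pool (broadcast) to the spatial size and add to the skip feature.
    return [m + gate * normalized for m in main_feat]
```

The gate is the key design choice: because the auxiliary signal enters through a learned multiplicative factor, back-propagation can shrink its influence on the depth branch instead of forcing hard parameter sharing.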
18. Key Ideas of Proposed MTN (2/4)
2) Novel Module for Multi-task Learning
No need to find an optimal split point or shared parameters <c.f. (b), (c), (d)>.
Reduces adverse effects from an inbuilt sharing mechanism <c.f. (a), (b)>.
Optimized end-to-end with the same strategy as general multi-task learning <c.f. (d)>.
At inference time, the Interleaver unit can be removed <c.f. (d)>.
[Figure: previous multi-task learning — (a) fully shared architecture, (b) partially split architectures, (c) unshared architecture — vs. our multi-task learning, (d) connected architecture]
19. Key Ideas of Proposed MTN (3/4)
3) Photometric Correction
“Thermal crossover”: a thermal-infrared image is not directly affected by changing lighting conditions, but it does suffer indirectly from the cyclic day–night illumination.
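The slide does not spell out the correction itself. One common way to flatten slow, global intensity drift in an image (purely an illustrative stand-in, not necessarily the paper's method) is per-image histogram equalization:

```python
def histogram_equalize(pixels, levels=256):
    """Map integer intensities in [0, levels) through the image's own
    normalized CDF, flattening global brightness drift between images."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Cumulative distribution function of the intensity histogram.
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    n = len(pixels)
    # Rescale the CDF to the output range [0, levels - 1].
    return [round((cdf[p] - 1) / max(n - 1, 1) * (levels - 1)) for p in pixels]
```

Because each image is normalized against its own statistics, a day image and a night image of the same scene end up on a more comparable intensity scale.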
20. Key Ideas of Proposed MTN (4/4)
4) Adaptive scaled sigmoid function
We propose the adaptive scaled sigmoid to train the model stably, in place of a bilinear activation function.
Starting from an initial, smaller maximum disparity 𝛽0, we iteratively increase it by 𝛼 at each epoch so that the full disparity range is covered by the end of training.
As the derivative shows, a sigmoid scaled to a large disparity range is unstable in the initial stages of training.
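Concretely, a scaled sigmoid d(x) = β·σ(x) has derivative β·σ(x)(1 − σ(x)), which is proportional to β, so using the full disparity range from the start yields large, unstable early gradients. A sketch of the schedule described above (the constants β0, α, and the cap are illustrative, not the paper's values):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def adaptive_scaled_sigmoid(x, epoch, beta0=30.0, alpha=10.0, beta_max=192.0):
    """Disparity activation d(x) = beta * sigmoid(x): the maximum disparity
    beta starts at beta0 and grows by alpha per epoch, capped at beta_max."""
    beta = min(beta0 + alpha * epoch, beta_max)
    return beta * sigmoid(x)

def scaled_sigmoid_grad(x, beta):
    """d'(x) = beta * s * (1 - s) -- proportional to beta, which is why
    starting with the full disparity range destabilizes early training."""
    s = sigmoid(x)
    return beta * s * (1.0 - s)
```

Early epochs thus see a gentle slope (small β), and the output range widens gradually as training stabilizes.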
25. Conclusion
Employ multi-task learning for depth estimation.
Novel architecture for multi-task learning: the 𝑰𝒏𝒕𝒆𝒓𝒍𝒆𝒂𝒗𝒆𝒓, placed in every skip-connected layer — 1. pooling mechanism + L2 normalization (enlarges the receptive field), 2. gated unit via convolution, 3. up-sampling.
Photometric correction helps when dealing with thermal images.
The adaptive sigmoid function helps the model converge stably.