Fisheye/Omnidirectional View in
Autonomous Driving III
Yu Huang
Yu.huang07@gmail.com
Sunnyvale, California
Outline
• DS-PASS: Detail-Sensitive Panoramic Annular Semantic Segmentation
through SwaftNet for Surrounding Sensing
• The OmniScape Dataset (ICRA’2020)
• Universal Semantic Segmentation for Fisheye Urban Driving Images
• Vehicle Re-ID for Surround-view Camera System
• SynDistNet: Self-Supervised Monocular Fisheye Camera Distance
Estimation Synergized with Semantic Segmentation for Autonomous
Driving
• Towards Autonomous Driving: a Multi-Modal 360 Perception Proposal
DS-PASS: Detail-Sensitive Panoramic Annular Semantic
Segmentation through SwaftNet for Surrounding Sensing
• This paper proposes a network adaptation framework to achieve Panoramic
Annular Semantic Segmentation (PASS), which allows re-using conventional
pinhole-view image datasets and enables modern segmentation networks to
adapt comfortably to panoramic images.
• Specifically, the proposed SwaftNet is adapted to enhance the sensitivity to details by
implementing attention-based lateral connections between the detail-critical
encoder layers and the context-critical decoder layers. The paper benchmarks the
performance of efficient segmenters on panoramic segmentation with an
extended PASS dataset, demonstrating that the proposed real-time SwaftNet
outperforms state-of-the-art efficient networks.
• Furthermore, real-world performance is assessed by deploying the Detail-Sensitive
PASS (DS-PASS) system on a mobile robot and an instrumented vehicle, as well as
the benefit of panoramic semantics for visual odometry, showing the robustness
and potential to support diverse navigational applications.
DS-PASS: Detail-Sensitive Panoramic Annular Semantic
Segmentation through SwaftNet for Surrounding Sensing
Panoramic annular semantic segmentation. Left: raw annular image. First row on the right:
unfolded panorama. Second row: panoramic segmentation from the baseline method, where the
pedestrian classification heatmap is blurry. Third row: detail-sensitive panoramic segmentation
from the proposed method, where the heatmap and semantic map preserve details.
DS-PASS: Detail-Sensitive Panoramic Annular Semantic
Segmentation through SwaftNet for Surrounding Sensing
The proposed framework for panoramic
annular semantic segmentation. Each
feature model (corresponding to the single
feature model, i.e. the encoder, in conventional
architectures) is responsible for predicting
the semantically meaningful high-level
feature map of a panorama segment while
interacting with the neighboring segments
through cross-segment padding (indicated
by the dotted arrows). The fusion model
incorporates the feature maps and
completes the panoramic segmentation.
The proposed architecture follows the single-
scale model of SwiftNet, based on a U-shaped
structure like U-Net and LinkNet.
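As a rough illustration of the cross-segment padding idea described above, the sketch below pads each segment's feature map with border columns taken from its ring neighbours instead of zeros; the four-segment layout and PyTorch tensors are assumptions, not the authors' implementation.

```python
import torch

def cross_segment_pad(feats, pad=1):
    """Illustrative sketch of cross-segment padding (not the authors' code).

    feats: list of feature maps [N, C, H, W], one per panorama segment,
    ordered left-to-right around the ring. Each segment is padded on its
    left/right borders with columns taken from its circular neighbours,
    so convolutions near segment boundaries see consistent context.
    """
    padded = []
    k = len(feats)
    for i, f in enumerate(feats):
        left = feats[(i - 1) % k][..., -pad:]   # right border of the left neighbour
        right = feats[(i + 1) % k][..., :pad]   # left border of the right neighbour
        padded.append(torch.cat([left, f, right], dim=-1))
    return padded
```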
DS-PASS: Detail-Sensitive Panoramic Annular Semantic
Segmentation through SwaftNet for Surrounding Sensing
The proposed architecture with attention-based lateral connections to blend semantically
rich deep layers with spatially detailed shallow layers. The down-sampling path with the SPP
module (encoder) corresponds to the feature model in the previous figure, while the up-sampling
path (decoder) corresponds to the fusion model.
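The attention-based lateral connection can be pictured as a channel-attention gate on the skip features before they are blended with the upsampled decoder features. The module below is a minimal sketch assuming SE-style attention; the channel sizes and gating details are illustrative and not the exact SwaftNet design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionLateral(nn.Module):
    """Minimal sketch of an attention-gated lateral (skip) connection."""
    def __init__(self, enc_ch, dec_ch, r=8):
        super().__init__()
        self.proj = nn.Conv2d(enc_ch, dec_ch, kernel_size=1)
        self.gate = nn.Sequential(                 # SE-style channel attention
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(dec_ch, dec_ch // r, 1), nn.ReLU(inplace=True),
            nn.Conv2d(dec_ch // r, dec_ch, 1), nn.Sigmoid(),
        )

    def forward(self, enc_feat, dec_feat):
        skip = self.proj(enc_feat)                 # detail-rich shallow features
        skip = skip * self.gate(skip)              # re-weight channels by attention
        dec = F.interpolate(dec_feat, size=skip.shape[-2:],
                            mode="bilinear", align_corners=False)
        return dec + skip                          # blend with semantic-rich deep features
```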
DS-PASS: Detail-Sensitive Panoramic Annular Semantic
Segmentation through SwaftNet for Surrounding Sensing
The OmniScape Dataset
• Despite the utility and benefits of omnidirectional images in robotics and automotive applications,
there are no omnidirectional image datasets available with semantic segmentation, depth maps,
and dynamic properties.
• This is due to the time cost and human effort required to annotate ground-truth images.
• This paper presents a framework for generating omnidirectional images from images
acquired in a virtual environment.
• For this purpose, it demonstrates the relevance of the proposed framework on two well-known
simulators: the CARLA Simulator, an open-source simulator for autonomous driving research, and
Grand Theft Auto V (GTA V), a very high-quality video game.
• It describes in detail the generated OmniScape dataset, which includes stereo fisheye and catadioptric
images acquired from the two front sides of a motorcycle, together with semantic segmentation, depth
maps, intrinsic parameters of the cameras and the dynamic parameters of the motorcycle.
• It is worth noting that the case of two-wheeled vehicles is more challenging than that of cars due to the
specific dynamics of these vehicles.
The OmniScape Dataset
Recording platform and a representation of the different modalities
The OmniScape Dataset
Lookup table construction to set the omnidirectional image pixel values
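Once such a lookup table has been built from the omnidirectional camera model, filling the output image reduces to a per-pixel remapping. The snippet below is a minimal sketch using OpenCV's remap; variable names are illustrative, and this is not the OmniScape generation code.

```python
import cv2

def apply_lookup(face_img, map_x, map_y):
    """Fill the omnidirectional image from a rendered perspective view using
    a precomputed lookup table. map_x / map_y are float32 arrays of the
    omnidirectional image size giving, for every output pixel, the (x, y)
    source coordinate to sample in face_img."""
    return cv2.remap(face_img, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```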
The OmniScape Dataset
The omnidirectional camera model
The OmniScape Dataset
Universal Semantic Segmentation for Fisheye
Urban Driving Images
• When performing semantic image segmentation, a wider field of view (FoV) helps to
obtain more information about the surrounding environment, making automated driving
safer and more reliable; such a wide FoV can be offered by fisheye cameras.
• However, large public fisheye datasets are not available, and the images captured by a
fisheye camera with a large FoV come with strong distortion, so commonly used
semantic segmentation models cannot be directly applied.
• In this paper, a 7-DoF augmentation method is proposed to transform rectilinear images
into fisheye images in a more comprehensive way.
• In training, rectilinear images are transformed into fisheye images in 7 DoF, which
simulates fisheye images captured from different positions, orientations and focal lengths.
• The results show that training with the seven-DoF augmentation improves the model's
accuracy and robustness against differently distorted fisheye data.
• This seven-DoF augmentation provides a universal semantic segmentation solution for
fisheye cameras in different autonomous driving applications.
• The code and configurations are released at https://github.com/Yaozhuwa/FisheyeSeg.
Universal Semantic Segmentation for Fisheye
Urban Driving Images
Projection model of the fisheye camera. PW is a
point on a rectilinear image that is placed on
the x-y plane of the world coordinate system.
θ is the angle of incidence of the point
relative to the fisheye camera. P is the
imaging point of PW on the fisheye image,
with |OP| = fθ. The relative rotation and
translation between the world coordinate
system and the camera coordinate system
account for six degrees of freedom.
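A small sketch of this equidistant projection (|OP| = fθ) for a point already expressed in camera coordinates; applying the 6-DoF world-to-camera rotation and translation beforehand is assumed.

```python
import numpy as np

def project_equidistant(P_c, f, cx, cy):
    """Equidistant fisheye projection r = f * theta for a point
    P_c = (X, Y, Z) given in camera coordinates."""
    X, Y, Z = P_c
    theta = np.arctan2(np.hypot(X, Y), Z)   # angle of incidence w.r.t. the optical axis
    phi = np.arctan2(Y, X)                  # azimuth in the image plane
    r = f * theta                           # |OP| = f * theta
    return cx + r * np.cos(phi), cy + r * np.sin(phi)
```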
Universal Semantic Segmentation for Fisheye
Urban Driving Images
The six-DoF augmentation.
Except for the first row, every
image is transformed using
a virtual fisheye camera
with a focal length of 300
pixels. The letter in brackets
indicates which axis the
camera is translated along or
rotated around.
Universal Semantic Segmentation for Fisheye
Urban Driving Images
Synthetic fisheye images with different focal lengths f.
Universal Semantic Segmentation for Fisheye
Urban Driving Images
1. Base Aug: random clipping + random flip + color
jitter + z-aug of fixed focal length
2. RandF Aug: Base Aug + random focal length
3. RandR Aug: Base Aug + random rotation
4. RandT Aug: Base Aug + random translation
5. RandFR Aug: Base Aug + random focal length +
random rotation
6. RandFT Aug: Base Aug + random focal length +
random translation
7. Six-DoF Aug: Base Aug + random rotation +
random translation
8. Seven-DoF Aug: Base Aug + random focal length
+ random rotation + random translation
Seven-DoF Augmentation
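The seven-DoF augmentation can be thought of as sampling a virtual fisheye camera per training image. The sketch below only samples the parameters; the value ranges are assumptions, not the paper's settings, and the actual image warp would use the equidistant model shown earlier.

```python
import numpy as np

def sample_seven_dof(rng=np.random):
    """Illustrative parameter sampling for the seven-DoF augmentation;
    the ranges are placeholder assumptions."""
    return {
        "f": rng.uniform(200, 400),                     # random focal length (pixels)
        "rotation": rng.uniform(-10, 10, size=3),       # roll / pitch / yaw in degrees
        "translation": rng.uniform(-0.5, 0.5, size=3),  # x / y / z shift of the virtual camera
    }

# Each sampled set defines a virtual fisheye camera; the rectilinear training
# image is then re-projected through it to synthesize a distorted fisheye view.
```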
Vehicle Re-ID for Surround-view Camera System
• Vehicle re-identification (Re-ID) plays a critical role in the perception system of
autonomous driving and has attracted increasing attention in recent years.
• However, there is no existing complete solution for the surround-view camera system
mounted on the vehicle.
• There are two main challenges in this scenario: i) In single-camera view, it is difficult to recognize
the same vehicle in past image frames due to fisheye distortion, occlusion,
truncation, etc. ii) In multi-camera view, the appearance of the same vehicle varies
greatly across different camera viewpoints.
• Thus, an integral vehicle Re-ID solution is proposed to address these problems.
• Specifically, a quality evaluation mechanism balances the effect of tracking-box drift
and target consistency.
• Besides, an attention-based Re-ID network is combined with a spatial constraint
strategy to further boost the performance across different cameras.
• The code and annotated fisheye dataset will be released for the benefit of the community.
Vehicle Re-ID for Surround-view Camera System
360 surround-view camera system. Each
arrow points to an image captured by the
corresponding camera.
Vehicle Re-ID for Surround-view Camera System
Vehicles in single view of fisheye camera. (a) The same vehicle features change dramatically in
consecutive frames and vehicles tend to obscure each other. (b) Matching errors are caused
by tracking results. (c) The vehicle center indicated by the orange box is stable while the IoU in
consecutive frames indicated by the yellow box decreases with movement.
Vehicle Re-ID for Surround-view Camera System
The overall framework of vehicle Re-ID in a single camera. Each object is assigned
a single tracker to realize Re-ID in a single channel. Tracking templates are
initialized with object detection results. All tracking outputs are post-processed by
the quality evaluation module to deal with distorted or occluded objects.
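The slides do not spell out the quality score, but a hedged sketch of the idea of balancing box drift against target consistency (stable centre vs. decaying IoU, as in the earlier figure) could look like the following; the weights and normalization are purely illustrative assumptions.

```python
def box_quality(center_drift, iou, w_c=0.5, w_iou=0.5, drift_norm=50.0):
    """Hypothetical tracking-box quality score; a low score would trigger
    re-detection of the distorted or occluded object."""
    drift_score = max(0.0, 1.0 - center_drift / drift_norm)  # stable centre -> high score
    return w_c * drift_score + w_iou * iou
```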
Vehicle Re-ID for Surround-view Camera System
The overall framework of vehicle Re-ID across multiple cameras. For a new target, the Re-ID model first
extracts its features, and then a distance metric is computed between this feature and the features in the
gallery. Besides, the spatial constraint strategy is adopted to improve the association.
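A minimal sketch of the gallery matching step, assuming cosine distance and a fixed acceptance threshold; the actual metric and threshold used in the paper may differ.

```python
import numpy as np

def match_to_gallery(query_feat, gallery_feats, gallery_ids, max_dist=0.4):
    """Match a query feature against the gallery; return the matched identity
    or None if no gallery feature is close enough (i.e. a new identity)."""
    q = query_feat / np.linalg.norm(query_feat)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    dists = 1.0 - g @ q                       # cosine distance to every gallery feature
    best = int(np.argmin(dists))
    return gallery_ids[best] if dists[best] < max_dist else None
```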
Vehicle Re-ID for Surround-view Camera System
Samples captured by different cameras. (a) The appearances of the same vehicle
captured by different cameras vary greatly, and the same color represents the same
object. (b) Objects with a similar appearance may appear in the same camera
view, as shown by the two black vehicles in green boxes.
Vehicle Re-ID for Surround-view Camera System
Illustration of the multi-camera Re-ID
network. The network has a two-branch
parallel structure. The top branch is
employed to make the network pay
more attention to object regions, and
the other branch extracts global
features.
Vehicle Re-ID for Surround-view Camera System
Projection uncertainty of key points. Ellipse 1 and ellipse 2 are
uncertainty ranges of front and left (right) cameras, respectively.
Vehicle Re-ID for Surround-view Camera System
SynDistNet: Self-Supervised Monocular Fisheye Camera Distance Estimation
Synergized with Semantic Segmentation for Autonomous Driving
• This paper introduces a novel multi-task learning strategy to improve self-
supervised monocular distance estimation on fisheye and pinhole camera images.
• The contributions of this work are threefold:
• Firstly, it introduces a novel distance estimation network architecture using a self-attention-
based encoder coupled with robust semantic feature guidance to the decoder, which can be
trained in a one-stage fashion.
• Secondly, it integrates a generalized robust loss function, which improves performance
significantly while removing the need for hyperparameter tuning of the reprojection loss (a
sketch follows this list).
• Finally, it reduces the artifacts caused by dynamic objects violating the static-world assumption
by using a semantic masking strategy.
• As there is limited prior work on fisheye cameras, the method is also evaluated on KITTI using a
pinhole model.
• It achieves state-of-the-art performance among self-supervised methods without
requiring an external scale estimation.
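For the second contribution, the generalized robust loss referred to is Barron's general robust loss. A minimal fixed-parameter sketch is shown below; the paper's point is that adapting α removes manual tuning, which this sketch does not do.

```python
def general_robust_loss(x, alpha=1.0, c=1.0):
    """Barron's general robust loss rho(x, alpha, c), valid for alpha not in {0, 2}.
    With alpha = 1 it reduces to the Charbonnier (smooth L1) penalty."""
    b = abs(alpha - 2.0)
    return (b / alpha) * (((x / c) ** 2 / b + 1.0) ** (alpha / 2.0) - 1.0)

# Example: general_robust_loss(0.5) ~= 0.118 (sqrt(1.25) - 1)
```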
SynDistNet: Self-Supervised Monocular Fisheye Camera Distance Estimation
Synergized with Semantic Segmentation for Autonomous Driving
Overview of the joint prediction of distance
^Dt and semantic segmentation Mt from a
single input image It. Compared to previous
approaches, the semantically guided
distance estimation produces sharper depth
edges and reasonable distance estimates for
dynamic objects.
SynDistNet: Self-Supervised Monocular Fisheye Camera Distance Estimation
Synergized with Semantic Segmentation for Autonomous Driving
• The self-supervised depth and distance estimation is developed within a self-
supervised monocular structure-from-motion (SfM) framework, which requires two
networks:
• 1. a monocular depth/distance model gD : It -> ^Dt predicting a scale-ambiguous
depth or distance (the equivalent of depth for general image geometries) ^Dt =
gD(It(ij)) per pixel ij in the target image It;
• 2. an ego-motion predictor gT : (It, It') -> Tt->t' predicting a set of 6 degrees of
freedom that implement a rigid transformation Tt->t' ∊ SE(3) between the target
image It and the set of reference images It'. Typically, t' ∊ {t + 1, t – 1}, i.e. the frames
It-1 and It+1 are used as reference images, although using a larger window is possible.
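For the pinhole case, the two networks are coupled through view synthesis: the reference frame is warped into the target view using the predicted depth and ego-motion, and a photometric loss compares the warp with the target image. The function below is a much-reduced sketch of that warping step, assuming tensors for K (3x3) and T (4x4); it is not the authors' fisheye formulation, and points behind the camera are not handled.

```python
import torch
import torch.nn.functional as F

def warp_reference(I_ref, D_t, T, K):
    """Warp the reference image into the target view: back-project target
    pixels with predicted depth D_t [N,1,H,W], transform with ego-motion T,
    re-project with intrinsics K, and sample I_ref at the resulting pixels."""
    n, _, h, w = D_t.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], 0).float().reshape(3, -1)
    rays = torch.linalg.inv(K) @ pix                       # back-projected pixel rays
    pts = D_t.reshape(n, 1, -1) * rays                     # 3D points in the target frame
    pts_h = torch.cat([pts, torch.ones(n, 1, h * w)], 1)   # homogeneous coordinates
    cam = (T @ pts_h)[:, :3]                               # move into the reference frame
    proj = K @ cam
    uv = proj[:, :2] / proj[:, 2:3].clamp(min=1e-6)
    u = 2.0 * uv[:, 0] / (w - 1) - 1.0                     # normalize to [-1, 1]
    v = 2.0 * uv[:, 1] / (h - 1) - 1.0
    grid = torch.stack([u, v], -1).reshape(n, h, w, 2)
    return F.grid_sample(I_ref, grid, align_corners=True)
```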
SynDistNet: Self-Supervised Monocular Fisheye Camera Distance Estimation
Synergized with Semantic Segmentation for Autonomous Driving
Overview of the proposed
framework for the joint
prediction of distance and
semantic segmentation. The
upper part (blue blocks)
describes the individual steps
for the depth estimation, while
the green blocks describe the
individual steps needed for the
prediction of the semantic
segmentation.
SynDistNet: Self-Supervised Monocular Fisheye Camera Distance Estimation
Synergized with Semantic Segmentation for Autonomous Driving
Visualization of the proposed network architecture
to semantically guide the depth estimation.
It utilizes a self-attention-based encoder
and a semantically guided decoder using
pixel-adaptive convolutions.
SynDistNet: Self-Supervised Monocular Fisheye Camera Distance Estimation
Synergized with Semantic Segmentation for Autonomous Driving
Quantitative comparison of the network with other self-supervised monocular methods for depths
up to 80 m on KITTI. "Original" uses raw depth maps for evaluation, and "Improved" uses annotated depth
maps. At test time, all methods except FisheyeDistanceNet, PackNet-SfM and this method scale the
estimated depths using the median ground-truth LiDAR depth.
SynDistNet: Self-Supervised Monocular Fisheye Camera Distance Estimation
Synergized with Semantic Segmentation for Autonomous Driving
Qualitative comparison on the fisheye WoodScape dataset between the baseline model without the
proposed contributions and SynDistNet. SynDistNet can recover the distance of dynamic objects (left
images), which solves the infinite-distance issue. In the 3rd and 4th columns, one can see that
semantic guidance helps to recover thin structures and resolve the distance of homogeneous areas,
outputting sharp distance maps on raw fisheye images.
Towards Autonomous Driving: a Multi-Modal
360 Perception Proposal
• A multi-modal 360 framework for 3D object detection and tracking for
autonomous vehicles is presented.
• The process is divided into four main stages.
• First, images are fed into a CNN network to obtain instance segmentation of the
surrounding road participants.
• Second, LiDAR-to-image association is performed for the estimated mask proposals.
• Then, the isolated points of every object are processed by a PointNet ensemble to
compute their corresponding 3D bounding boxes and poses.
• A tracking stage based on an Unscented Kalman Filter is used to track the agents over
time.
• The solution, based on a sensor fusion configuration, provides accurate and
reliable road environment detection.
• A wide variety of tests of the system, deployed in an autonomous vehicle,
have successfully assessed the suitability of the proposed perception stack
in a real autonomous driving application.
Towards Autonomous Driving: a Multi-Modal
360 Perception Proposal
• The following sensors are employed:
• Five CMOS cameras, each equipped with an 85° HFOV lens.
• A 32-layer LiDAR scanner featuring a minimum vertical resolution of 0.33° and a range of
200 m (Velodyne Ultra Puck).
• Accurate synchronization and calibration between sensors are of paramount
importance.
• Hence, all sensors are synchronized with the clock provided by a GPS receiver, and the
cameras are externally triggered at a 10 Hz rate.
• Regarding calibration, cameras’ intrinsic parameters are obtained through the
checkerboard-based approach by Zhang, and extrinsic parameters representing
the relative position between sensors are estimated through a monocular-ready
variant of the velo2cam method.
• The result of this automatic procedure is further validated by visual inspection.
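For reference, Zhang's checkerboard-based intrinsic calibration is available directly in OpenCV; the sketch below shows the usual call sequence, with the board geometry and the `images` list as placeholder assumptions rather than the authors' setup.

```python
import cv2
import numpy as np

def calibrate_intrinsics(images, board=(9, 6), square=0.025):
    """Estimate camera intrinsics from grayscale checkerboard views
    (assumed 9x6 inner corners, 25 mm squares)."""
    objp = np.zeros((board[0] * board[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square
    obj_pts, img_pts = [], []
    for img in images:
        ok, corners = cv2.findChessboardCorners(img, board)
        if ok:
            obj_pts.append(objp)
            img_pts.append(corners)
    # Returns the intrinsic matrix K and the distortion coefficients.
    _, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts,
                                           images[0].shape[::-1], None, None)
    return K, dist
```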
Towards Autonomous Driving: a Multi-Modal
360 Perception Proposal
• The proposed solution is based on three pillars.
• First, visual data is employed to perform detection and instance level
semantic segmentation.
• Then, LiDAR points whose image projection falls within each obstacle's
bounding polygon are employed to estimate its 3D pose (see the sketch after this list).
• Finally, the tracking stage provides consistency, thus mitigating
occasional misdetections and enabling trajectory prediction.
• The combination of these three stages allows accurate and robust
identification of the dynamic agents surrounding the vehicle.
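The LiDAR-to-image association in the second pillar amounts to projecting LiDAR points into each camera and keeping those that land inside an instance mask. The sketch below assumes a binary mask, a 4x4 camera-from-LiDAR extrinsic and a 3x3 intrinsic matrix; it is an illustration, not the authors' code.

```python
import numpy as np

def points_in_mask(points_lidar, T_cam_lidar, K, mask):
    """Return the LiDAR points whose image projection falls inside `mask`."""
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    in_front = pts_cam[:, 2] > 0                        # keep points in front of the camera
    uv = (K @ pts_cam[in_front].T).T
    uv = (uv[:, :2] / uv[:, 2:3]).astype(int)           # perspective division
    h, w = mask.shape
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    uv = uv[valid]
    keep = mask[uv[:, 1], uv[:, 0]] > 0                 # inside the instance mask
    return points_lidar[in_front][valid][keep]
```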
Towards Autonomous Driving: a Multi-Modal
360 Perception Proposal
System overview. Images from all the cameras are processed by individual instances of Mask R-
CNN, which provide detections endowed with a semantic mask. LiDAR points in these regions are
used as input for several F-PointNets responsible for estimating a 3D bounding box and its
position with respect to the car. Then, 3D detections from each camera are fused using
an NMS procedure. A subsequent tracking stage provides consistency across frames and
avoids temporary misdetections.
Towards Autonomous Driving: a Multi-Modal
360 Perception Proposal
Qualitative results of the proposed system on some typical traffic scenarios. From top to bottom: 3D
detections in rear-left, front-left, front, front-right, and rear-right cameras, and Bird’s Eye View representation.
Fisheye-Omnidirectional View in Autonomous Driving III

More Related Content

What's hot

Jointly mapping, localization, perception, prediction and planning
Jointly mapping, localization, perception, prediction and planningJointly mapping, localization, perception, prediction and planning
Jointly mapping, localization, perception, prediction and planning
Yu Huang
 
Driving Behavior for ADAS and Autonomous Driving IX
Driving Behavior for ADAS and Autonomous Driving IXDriving Behavior for ADAS and Autonomous Driving IX
Driving Behavior for ADAS and Autonomous Driving IX
Yu Huang
 
Pedestrian Behavior/Intention Modeling for Autonomous Driving VI
Pedestrian Behavior/Intention Modeling for Autonomous Driving VIPedestrian Behavior/Intention Modeling for Autonomous Driving VI
Pedestrian Behavior/Intention Modeling for Autonomous Driving VI
Yu Huang
 
Pedestrian behavior/intention modeling for autonomous driving III
Pedestrian behavior/intention modeling for autonomous driving IIIPedestrian behavior/intention modeling for autonomous driving III
Pedestrian behavior/intention modeling for autonomous driving III
Yu Huang
 
Driving behaviors for adas and autonomous driving xiv
Driving behaviors for adas and autonomous driving xivDriving behaviors for adas and autonomous driving xiv
Driving behaviors for adas and autonomous driving xiv
Yu Huang
 
Camera-Based Road Lane Detection by Deep Learning II
Camera-Based Road Lane Detection by Deep Learning IICamera-Based Road Lane Detection by Deep Learning II
Camera-Based Road Lane Detection by Deep Learning II
Yu Huang
 
Pedestrian behavior/intention modeling for autonomous driving II
Pedestrian behavior/intention modeling for autonomous driving IIPedestrian behavior/intention modeling for autonomous driving II
Pedestrian behavior/intention modeling for autonomous driving II
Yu Huang
 
Driving Behavior for ADAS and Autonomous Driving VIII
Driving Behavior for ADAS and Autonomous Driving VIIIDriving Behavior for ADAS and Autonomous Driving VIII
Driving Behavior for ADAS and Autonomous Driving VIII
Yu Huang
 
Driving Behavior for ADAS and Autonomous Driving VII
Driving Behavior for ADAS and Autonomous Driving VIIDriving Behavior for ADAS and Autonomous Driving VII
Driving Behavior for ADAS and Autonomous Driving VII
Yu Huang
 
Depth Fusion from RGB and Depth Sensors IV
Depth Fusion from RGB and Depth Sensors  IVDepth Fusion from RGB and Depth Sensors  IV
Depth Fusion from RGB and Depth Sensors IV
Yu Huang
 
Deep Learning’s Application in Radar Signal Data II
Deep Learning’s Application in Radar Signal Data IIDeep Learning’s Application in Radar Signal Data II
Deep Learning’s Application in Radar Signal Data II
Yu Huang
 
Pedestrian behavior/intention modeling for autonomous driving V
Pedestrian behavior/intention modeling for autonomous driving VPedestrian behavior/intention modeling for autonomous driving V
Pedestrian behavior/intention modeling for autonomous driving V
Yu Huang
 
Driving Behavior for ADAS and Autonomous Driving VI
Driving Behavior for ADAS and Autonomous Driving VIDriving Behavior for ADAS and Autonomous Driving VI
Driving Behavior for ADAS and Autonomous Driving VI
Yu Huang
 
3-d interpretation from stereo images for autonomous driving
3-d interpretation from stereo images for autonomous driving3-d interpretation from stereo images for autonomous driving
3-d interpretation from stereo images for autonomous driving
Yu Huang
 
Deep Learning’s Application in Radar Signal Data
Deep Learning’s Application in Radar Signal DataDeep Learning’s Application in Radar Signal Data
Deep Learning’s Application in Radar Signal Data
Yu Huang
 
Stereo Matching by Deep Learning
Stereo Matching by Deep LearningStereo Matching by Deep Learning
Stereo Matching by Deep Learning
Yu Huang
 
Driving behaviors for adas and autonomous driving XI
Driving behaviors for adas and autonomous driving XIDriving behaviors for adas and autonomous driving XI
Driving behaviors for adas and autonomous driving XI
Yu Huang
 
3-d interpretation from single 2-d image IV
3-d interpretation from single 2-d image IV3-d interpretation from single 2-d image IV
3-d interpretation from single 2-d image IV
Yu Huang
 
Driving behaviors for adas and autonomous driving XIII
Driving behaviors for adas and autonomous driving XIIIDriving behaviors for adas and autonomous driving XIII
Driving behaviors for adas and autonomous driving XIII
Yu Huang
 
Deep VO and SLAM IV
Deep VO and SLAM IVDeep VO and SLAM IV
Deep VO and SLAM IV
Yu Huang
 

What's hot (20)

Jointly mapping, localization, perception, prediction and planning
Jointly mapping, localization, perception, prediction and planningJointly mapping, localization, perception, prediction and planning
Jointly mapping, localization, perception, prediction and planning
 
Driving Behavior for ADAS and Autonomous Driving IX
Driving Behavior for ADAS and Autonomous Driving IXDriving Behavior for ADAS and Autonomous Driving IX
Driving Behavior for ADAS and Autonomous Driving IX
 
Pedestrian Behavior/Intention Modeling for Autonomous Driving VI
Pedestrian Behavior/Intention Modeling for Autonomous Driving VIPedestrian Behavior/Intention Modeling for Autonomous Driving VI
Pedestrian Behavior/Intention Modeling for Autonomous Driving VI
 
Pedestrian behavior/intention modeling for autonomous driving III
Pedestrian behavior/intention modeling for autonomous driving IIIPedestrian behavior/intention modeling for autonomous driving III
Pedestrian behavior/intention modeling for autonomous driving III
 
Driving behaviors for adas and autonomous driving xiv
Driving behaviors for adas and autonomous driving xivDriving behaviors for adas and autonomous driving xiv
Driving behaviors for adas and autonomous driving xiv
 
Camera-Based Road Lane Detection by Deep Learning II
Camera-Based Road Lane Detection by Deep Learning IICamera-Based Road Lane Detection by Deep Learning II
Camera-Based Road Lane Detection by Deep Learning II
 
Pedestrian behavior/intention modeling for autonomous driving II
Pedestrian behavior/intention modeling for autonomous driving IIPedestrian behavior/intention modeling for autonomous driving II
Pedestrian behavior/intention modeling for autonomous driving II
 
Driving Behavior for ADAS and Autonomous Driving VIII
Driving Behavior for ADAS and Autonomous Driving VIIIDriving Behavior for ADAS and Autonomous Driving VIII
Driving Behavior for ADAS and Autonomous Driving VIII
 
Driving Behavior for ADAS and Autonomous Driving VII
Driving Behavior for ADAS and Autonomous Driving VIIDriving Behavior for ADAS and Autonomous Driving VII
Driving Behavior for ADAS and Autonomous Driving VII
 
Depth Fusion from RGB and Depth Sensors IV
Depth Fusion from RGB and Depth Sensors  IVDepth Fusion from RGB and Depth Sensors  IV
Depth Fusion from RGB and Depth Sensors IV
 
Deep Learning’s Application in Radar Signal Data II
Deep Learning’s Application in Radar Signal Data IIDeep Learning’s Application in Radar Signal Data II
Deep Learning’s Application in Radar Signal Data II
 
Pedestrian behavior/intention modeling for autonomous driving V
Pedestrian behavior/intention modeling for autonomous driving VPedestrian behavior/intention modeling for autonomous driving V
Pedestrian behavior/intention modeling for autonomous driving V
 
Driving Behavior for ADAS and Autonomous Driving VI
Driving Behavior for ADAS and Autonomous Driving VIDriving Behavior for ADAS and Autonomous Driving VI
Driving Behavior for ADAS and Autonomous Driving VI
 
3-d interpretation from stereo images for autonomous driving
3-d interpretation from stereo images for autonomous driving3-d interpretation from stereo images for autonomous driving
3-d interpretation from stereo images for autonomous driving
 
Deep Learning’s Application in Radar Signal Data
Deep Learning’s Application in Radar Signal DataDeep Learning’s Application in Radar Signal Data
Deep Learning’s Application in Radar Signal Data
 
Stereo Matching by Deep Learning
Stereo Matching by Deep LearningStereo Matching by Deep Learning
Stereo Matching by Deep Learning
 
Driving behaviors for adas and autonomous driving XI
Driving behaviors for adas and autonomous driving XIDriving behaviors for adas and autonomous driving XI
Driving behaviors for adas and autonomous driving XI
 
3-d interpretation from single 2-d image IV
3-d interpretation from single 2-d image IV3-d interpretation from single 2-d image IV
3-d interpretation from single 2-d image IV
 
Driving behaviors for adas and autonomous driving XIII
Driving behaviors for adas and autonomous driving XIIIDriving behaviors for adas and autonomous driving XIII
Driving behaviors for adas and autonomous driving XIII
 
Deep VO and SLAM IV
Deep VO and SLAM IVDeep VO and SLAM IV
Deep VO and SLAM IV
 

Similar to Fisheye-Omnidirectional View in Autonomous Driving III

Fisheye/Omnidirectional View in Autonomous Driving V
Fisheye/Omnidirectional View in Autonomous Driving VFisheye/Omnidirectional View in Autonomous Driving V
Fisheye/Omnidirectional View in Autonomous Driving V
Yu Huang
 
Fisheye/Omnidirectional View in Autonomous Driving IV
Fisheye/Omnidirectional View in Autonomous Driving IVFisheye/Omnidirectional View in Autonomous Driving IV
Fisheye/Omnidirectional View in Autonomous Driving IV
Yu Huang
 
Fisheye based Perception for Autonomous Driving VI
Fisheye based Perception for Autonomous Driving VIFisheye based Perception for Autonomous Driving VI
Fisheye based Perception for Autonomous Driving VI
Yu Huang
 
Fisheye Omnidirectional View in Autonomous Driving II
Fisheye Omnidirectional View in Autonomous Driving IIFisheye Omnidirectional View in Autonomous Driving II
Fisheye Omnidirectional View in Autonomous Driving II
Yu Huang
 
Video Stitching using Improved RANSAC and SIFT
Video Stitching using Improved RANSAC and SIFTVideo Stitching using Improved RANSAC and SIFT
Video Stitching using Improved RANSAC and SIFT
IRJET Journal
 
IRJET- Semantic Segmentation using Deep Learning
IRJET- Semantic Segmentation using Deep LearningIRJET- Semantic Segmentation using Deep Learning
IRJET- Semantic Segmentation using Deep Learning
IRJET Journal
 
An Experimental Analysis on Self Driving Car Using CNN
An Experimental Analysis on Self Driving Car Using CNNAn Experimental Analysis on Self Driving Car Using CNN
An Experimental Analysis on Self Driving Car Using CNN
IRJET Journal
 
Self-Driving Car to Drive Autonomously using Image Processing and Deep Learning
Self-Driving Car to Drive Autonomously using Image Processing and Deep LearningSelf-Driving Car to Drive Autonomously using Image Processing and Deep Learning
Self-Driving Car to Drive Autonomously using Image Processing and Deep Learning
IRJET Journal
 
IRJET- Location based Management of Profile
IRJET- Location based Management of ProfileIRJET- Location based Management of Profile
IRJET- Location based Management of Profile
IRJET Journal
 
The New Perception Framework in Autonomous Driving: An Introduction of BEV N...
The New Perception Framework  in Autonomous Driving: An Introduction of BEV N...The New Perception Framework  in Autonomous Driving: An Introduction of BEV N...
The New Perception Framework in Autonomous Driving: An Introduction of BEV N...
Yu Huang
 
IRJET- Robust and Fast Detection of Moving Vechiles in Aerial Videos usin...
IRJET-  	  Robust and Fast Detection of Moving Vechiles in Aerial Videos usin...IRJET-  	  Robust and Fast Detection of Moving Vechiles in Aerial Videos usin...
IRJET- Robust and Fast Detection of Moving Vechiles in Aerial Videos usin...
IRJET Journal
 
IRJET- Profile Management System
IRJET- Profile Management SystemIRJET- Profile Management System
IRJET- Profile Management System
IRJET Journal
 
An Assessment of Image Matching Algorithms in Depth Estimation
An Assessment of Image Matching Algorithms in Depth EstimationAn Assessment of Image Matching Algorithms in Depth Estimation
An Assessment of Image Matching Algorithms in Depth Estimation
CSCJournals
 
Юрій Іванов Тема: Програмні лічильники однотипних рухомих об’єктів: алгоритми...
Юрій Іванов Тема: Програмні лічильники однотипних рухомих об’єктів: алгоритми...Юрій Іванов Тема: Програмні лічильники однотипних рухомих об’єктів: алгоритми...
Юрій Іванов Тема: Програмні лічильники однотипних рухомих об’єктів: алгоритми...
Lviv Startup Club
 
In tech vision-based_obstacle_detection_module_for_a_wheeled_mobile_robot
In tech vision-based_obstacle_detection_module_for_a_wheeled_mobile_robotIn tech vision-based_obstacle_detection_module_for_a_wheeled_mobile_robot
In tech vision-based_obstacle_detection_module_for_a_wheeled_mobile_robot
Sudhakar Spartan
 
IRJET- Front View Identification of Vehicles by using Machine Learning Te...
IRJET-  	  Front View Identification of Vehicles by using Machine Learning Te...IRJET-  	  Front View Identification of Vehicles by using Machine Learning Te...
IRJET- Front View Identification of Vehicles by using Machine Learning Te...
IRJET Journal
 
Video Shot Boundary Detection Using The Scale Invariant Feature Transform and...
Video Shot Boundary Detection Using The Scale Invariant Feature Transform and...Video Shot Boundary Detection Using The Scale Invariant Feature Transform and...
Video Shot Boundary Detection Using The Scale Invariant Feature Transform and...
IJECEIAES
 
Ijciet 10 02_043
Ijciet 10 02_043Ijciet 10 02_043
Ijciet 10 02_043
IAEME Publication
 
Review On Different Feature Extraction Algorithms
Review On Different Feature Extraction AlgorithmsReview On Different Feature Extraction Algorithms
Review On Different Feature Extraction Algorithms
IRJET Journal
 
SkyStitch: a Cooperative Multi-UAV-based Real-time Video Surveillance System ...
SkyStitch: a Cooperative Multi-UAV-based Real-time Video Surveillance System ...SkyStitch: a Cooperative Multi-UAV-based Real-time Video Surveillance System ...
SkyStitch: a Cooperative Multi-UAV-based Real-time Video Surveillance System ...
Kitsukawa Yuki
 

Similar to Fisheye-Omnidirectional View in Autonomous Driving III (20)

Fisheye/Omnidirectional View in Autonomous Driving V
Fisheye/Omnidirectional View in Autonomous Driving VFisheye/Omnidirectional View in Autonomous Driving V
Fisheye/Omnidirectional View in Autonomous Driving V
 
Fisheye/Omnidirectional View in Autonomous Driving IV
Fisheye/Omnidirectional View in Autonomous Driving IVFisheye/Omnidirectional View in Autonomous Driving IV
Fisheye/Omnidirectional View in Autonomous Driving IV
 
Fisheye based Perception for Autonomous Driving VI
Fisheye based Perception for Autonomous Driving VIFisheye based Perception for Autonomous Driving VI
Fisheye based Perception for Autonomous Driving VI
 
Fisheye Omnidirectional View in Autonomous Driving II
Fisheye Omnidirectional View in Autonomous Driving IIFisheye Omnidirectional View in Autonomous Driving II
Fisheye Omnidirectional View in Autonomous Driving II
 
Video Stitching using Improved RANSAC and SIFT
Video Stitching using Improved RANSAC and SIFTVideo Stitching using Improved RANSAC and SIFT
Video Stitching using Improved RANSAC and SIFT
 
IRJET- Semantic Segmentation using Deep Learning
IRJET- Semantic Segmentation using Deep LearningIRJET- Semantic Segmentation using Deep Learning
IRJET- Semantic Segmentation using Deep Learning
 
An Experimental Analysis on Self Driving Car Using CNN
An Experimental Analysis on Self Driving Car Using CNNAn Experimental Analysis on Self Driving Car Using CNN
An Experimental Analysis on Self Driving Car Using CNN
 
Self-Driving Car to Drive Autonomously using Image Processing and Deep Learning
Self-Driving Car to Drive Autonomously using Image Processing and Deep LearningSelf-Driving Car to Drive Autonomously using Image Processing and Deep Learning
Self-Driving Car to Drive Autonomously using Image Processing and Deep Learning
 
IRJET- Location based Management of Profile
IRJET- Location based Management of ProfileIRJET- Location based Management of Profile
IRJET- Location based Management of Profile
 
The New Perception Framework in Autonomous Driving: An Introduction of BEV N...
The New Perception Framework  in Autonomous Driving: An Introduction of BEV N...The New Perception Framework  in Autonomous Driving: An Introduction of BEV N...
The New Perception Framework in Autonomous Driving: An Introduction of BEV N...
 
IRJET- Robust and Fast Detection of Moving Vechiles in Aerial Videos usin...
IRJET-  	  Robust and Fast Detection of Moving Vechiles in Aerial Videos usin...IRJET-  	  Robust and Fast Detection of Moving Vechiles in Aerial Videos usin...
IRJET- Robust and Fast Detection of Moving Vechiles in Aerial Videos usin...
 
IRJET- Profile Management System
IRJET- Profile Management SystemIRJET- Profile Management System
IRJET- Profile Management System
 
An Assessment of Image Matching Algorithms in Depth Estimation
An Assessment of Image Matching Algorithms in Depth EstimationAn Assessment of Image Matching Algorithms in Depth Estimation
An Assessment of Image Matching Algorithms in Depth Estimation
 
Юрій Іванов Тема: Програмні лічильники однотипних рухомих об’єктів: алгоритми...
Юрій Іванов Тема: Програмні лічильники однотипних рухомих об’єктів: алгоритми...Юрій Іванов Тема: Програмні лічильники однотипних рухомих об’єктів: алгоритми...
Юрій Іванов Тема: Програмні лічильники однотипних рухомих об’єктів: алгоритми...
 
In tech vision-based_obstacle_detection_module_for_a_wheeled_mobile_robot
In tech vision-based_obstacle_detection_module_for_a_wheeled_mobile_robotIn tech vision-based_obstacle_detection_module_for_a_wheeled_mobile_robot
In tech vision-based_obstacle_detection_module_for_a_wheeled_mobile_robot
 
IRJET- Front View Identification of Vehicles by using Machine Learning Te...
IRJET-  	  Front View Identification of Vehicles by using Machine Learning Te...IRJET-  	  Front View Identification of Vehicles by using Machine Learning Te...
IRJET- Front View Identification of Vehicles by using Machine Learning Te...
 
Video Shot Boundary Detection Using The Scale Invariant Feature Transform and...
Video Shot Boundary Detection Using The Scale Invariant Feature Transform and...Video Shot Boundary Detection Using The Scale Invariant Feature Transform and...
Video Shot Boundary Detection Using The Scale Invariant Feature Transform and...
 
Ijciet 10 02_043
Ijciet 10 02_043Ijciet 10 02_043
Ijciet 10 02_043
 
Review On Different Feature Extraction Algorithms
Review On Different Feature Extraction AlgorithmsReview On Different Feature Extraction Algorithms
Review On Different Feature Extraction Algorithms
 
SkyStitch: a Cooperative Multi-UAV-based Real-time Video Surveillance System ...
SkyStitch: a Cooperative Multi-UAV-based Real-time Video Surveillance System ...SkyStitch: a Cooperative Multi-UAV-based Real-time Video Surveillance System ...
SkyStitch: a Cooperative Multi-UAV-based Real-time Video Surveillance System ...
 

More from Yu Huang

Application of Foundation Model for Autonomous Driving
Application of Foundation Model for Autonomous DrivingApplication of Foundation Model for Autonomous Driving
Application of Foundation Model for Autonomous Driving
Yu Huang
 
Data Closed Loop in Simulation Test of Autonomous Driving
Data Closed Loop in Simulation Test of Autonomous DrivingData Closed Loop in Simulation Test of Autonomous Driving
Data Closed Loop in Simulation Test of Autonomous Driving
Yu Huang
 
Techniques and Challenges in Autonomous Driving
Techniques and Challenges in Autonomous DrivingTechniques and Challenges in Autonomous Driving
Techniques and Challenges in Autonomous Driving
Yu Huang
 
BEV Joint Detection and Segmentation
BEV Joint Detection and SegmentationBEV Joint Detection and Segmentation
BEV Joint Detection and Segmentation
Yu Huang
 
BEV Object Detection and Prediction
BEV Object Detection and PredictionBEV Object Detection and Prediction
BEV Object Detection and Prediction
Yu Huang
 
Prediction,Planninng & Control at Baidu
Prediction,Planninng & Control at BaiduPrediction,Planninng & Control at Baidu
Prediction,Planninng & Control at Baidu
Yu Huang
 
Cruise AI under the Hood
Cruise AI under the HoodCruise AI under the Hood
Cruise AI under the Hood
Yu Huang
 
LiDAR in the Adverse Weather: Dust, Snow, Rain and Fog (2)
LiDAR in the Adverse Weather: Dust, Snow, Rain and Fog (2)LiDAR in the Adverse Weather: Dust, Snow, Rain and Fog (2)
LiDAR in the Adverse Weather: Dust, Snow, Rain and Fog (2)
Yu Huang
 
Scenario-Based Development & Testing for Autonomous Driving
Scenario-Based Development & Testing for Autonomous DrivingScenario-Based Development & Testing for Autonomous Driving
Scenario-Based Development & Testing for Autonomous Driving
Yu Huang
 
How to Build a Data Closed-loop Platform for Autonomous Driving?
How to Build a Data Closed-loop Platform for Autonomous Driving?How to Build a Data Closed-loop Platform for Autonomous Driving?
How to Build a Data Closed-loop Platform for Autonomous Driving?
Yu Huang
 
Annotation tools for ADAS & Autonomous Driving
Annotation tools for ADAS & Autonomous DrivingAnnotation tools for ADAS & Autonomous Driving
Annotation tools for ADAS & Autonomous Driving
Yu Huang
 
Simulation for autonomous driving at uber atg
Simulation for autonomous driving at uber atgSimulation for autonomous driving at uber atg
Simulation for autonomous driving at uber atg
Yu Huang
 
Multi sensor calibration by deep learning
Multi sensor calibration by deep learningMulti sensor calibration by deep learning
Multi sensor calibration by deep learning
Yu Huang
 
Data pipeline and data lake for autonomous driving
Data pipeline and data lake for autonomous drivingData pipeline and data lake for autonomous driving
Data pipeline and data lake for autonomous driving
Yu Huang
 
Open Source codes of trajectory prediction & behavior planning
Open Source codes of trajectory prediction & behavior planningOpen Source codes of trajectory prediction & behavior planning
Open Source codes of trajectory prediction & behavior planning
Yu Huang
 
Lidar in the adverse weather: dust, fog, snow and rain
Lidar in the adverse weather: dust, fog, snow and rainLidar in the adverse weather: dust, fog, snow and rain
Lidar in the adverse weather: dust, fog, snow and rain
Yu Huang
 
Autonomous Driving of L3/L4 Commercial trucks
Autonomous Driving of L3/L4 Commercial trucksAutonomous Driving of L3/L4 Commercial trucks
Autonomous Driving of L3/L4 Commercial trucks
Yu Huang
 
3-d interpretation from single 2-d image V
3-d interpretation from single 2-d image V3-d interpretation from single 2-d image V
3-d interpretation from single 2-d image V
Yu Huang
 
3-d interpretation from single 2-d image III
3-d interpretation from single 2-d image III3-d interpretation from single 2-d image III
3-d interpretation from single 2-d image III
Yu Huang
 
Unsupervised/Self-supervvised visual object tracking
Unsupervised/Self-supervvised visual object trackingUnsupervised/Self-supervvised visual object tracking
Unsupervised/Self-supervvised visual object tracking
Yu Huang
 

More from Yu Huang (20)

Application of Foundation Model for Autonomous Driving
Application of Foundation Model for Autonomous DrivingApplication of Foundation Model for Autonomous Driving
Application of Foundation Model for Autonomous Driving
 
Data Closed Loop in Simulation Test of Autonomous Driving
Data Closed Loop in Simulation Test of Autonomous DrivingData Closed Loop in Simulation Test of Autonomous Driving
Data Closed Loop in Simulation Test of Autonomous Driving
 
Techniques and Challenges in Autonomous Driving
Techniques and Challenges in Autonomous DrivingTechniques and Challenges in Autonomous Driving
Techniques and Challenges in Autonomous Driving
 
BEV Joint Detection and Segmentation
BEV Joint Detection and SegmentationBEV Joint Detection and Segmentation
BEV Joint Detection and Segmentation
 
BEV Object Detection and Prediction
BEV Object Detection and PredictionBEV Object Detection and Prediction
BEV Object Detection and Prediction
 
Prediction,Planninng & Control at Baidu
Prediction,Planninng & Control at BaiduPrediction,Planninng & Control at Baidu
Prediction,Planninng & Control at Baidu
 
Cruise AI under the Hood
Cruise AI under the HoodCruise AI under the Hood
Cruise AI under the Hood
 
LiDAR in the Adverse Weather: Dust, Snow, Rain and Fog (2)
LiDAR in the Adverse Weather: Dust, Snow, Rain and Fog (2)LiDAR in the Adverse Weather: Dust, Snow, Rain and Fog (2)
LiDAR in the Adverse Weather: Dust, Snow, Rain and Fog (2)
 
Scenario-Based Development & Testing for Autonomous Driving
Scenario-Based Development & Testing for Autonomous DrivingScenario-Based Development & Testing for Autonomous Driving
Scenario-Based Development & Testing for Autonomous Driving
 
How to Build a Data Closed-loop Platform for Autonomous Driving?
How to Build a Data Closed-loop Platform for Autonomous Driving?How to Build a Data Closed-loop Platform for Autonomous Driving?
How to Build a Data Closed-loop Platform for Autonomous Driving?
 
Annotation tools for ADAS & Autonomous Driving
Annotation tools for ADAS & Autonomous DrivingAnnotation tools for ADAS & Autonomous Driving
Annotation tools for ADAS & Autonomous Driving
 
Simulation for autonomous driving at uber atg
Simulation for autonomous driving at uber atgSimulation for autonomous driving at uber atg
Simulation for autonomous driving at uber atg
 
Multi sensor calibration by deep learning
Multi sensor calibration by deep learningMulti sensor calibration by deep learning
Multi sensor calibration by deep learning
 
Data pipeline and data lake for autonomous driving
Data pipeline and data lake for autonomous drivingData pipeline and data lake for autonomous driving
Data pipeline and data lake for autonomous driving
 
Open Source codes of trajectory prediction & behavior planning
Open Source codes of trajectory prediction & behavior planningOpen Source codes of trajectory prediction & behavior planning
Open Source codes of trajectory prediction & behavior planning
 
Lidar in the adverse weather: dust, fog, snow and rain
Lidar in the adverse weather: dust, fog, snow and rainLidar in the adverse weather: dust, fog, snow and rain
Lidar in the adverse weather: dust, fog, snow and rain
 
Autonomous Driving of L3/L4 Commercial trucks
Autonomous Driving of L3/L4 Commercial trucksAutonomous Driving of L3/L4 Commercial trucks
Autonomous Driving of L3/L4 Commercial trucks
 
3-d interpretation from single 2-d image V
3-d interpretation from single 2-d image V3-d interpretation from single 2-d image V
3-d interpretation from single 2-d image V
 
3-d interpretation from single 2-d image III
3-d interpretation from single 2-d image III3-d interpretation from single 2-d image III
3-d interpretation from single 2-d image III
 
Unsupervised/Self-supervvised visual object tracking
Unsupervised/Self-supervvised visual object trackingUnsupervised/Self-supervvised visual object tracking
Unsupervised/Self-supervvised visual object tracking
 

Recently uploaded

Fundamentals of Induction Motor Drives.pptx
Fundamentals of Induction Motor Drives.pptxFundamentals of Induction Motor Drives.pptx
Fundamentals of Induction Motor Drives.pptx
manasideore6
 
一比一原版(Otago毕业证)奥塔哥大学毕业证成绩单如何办理
一比一原版(Otago毕业证)奥塔哥大学毕业证成绩单如何办理一比一原版(Otago毕业证)奥塔哥大学毕业证成绩单如何办理
一比一原版(Otago毕业证)奥塔哥大学毕业证成绩单如何办理
dxobcob
 
Online aptitude test management system project report.pdf
Online aptitude test management system project report.pdfOnline aptitude test management system project report.pdf
Online aptitude test management system project report.pdf
Kamal Acharya
 
一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单专业办理
一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单专业办理一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单专业办理
一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单专业办理
zwunae
 
ACEP Magazine edition 4th launched on 05.06.2024
ACEP Magazine edition 4th launched on 05.06.2024ACEP Magazine edition 4th launched on 05.06.2024
ACEP Magazine edition 4th launched on 05.06.2024
Rahul
 
Hierarchical Digital Twin of a Naval Power System
Hierarchical Digital Twin of a Naval Power SystemHierarchical Digital Twin of a Naval Power System
Hierarchical Digital Twin of a Naval Power System
Kerry Sado
 
spirit beverages ppt without graphics.pptx
spirit beverages ppt without graphics.pptxspirit beverages ppt without graphics.pptx
spirit beverages ppt without graphics.pptx
Madan Karki
 
BPV-GUI-01-Guide-for-ASME-Review-Teams-(General)-10-10-2023.pdf
BPV-GUI-01-Guide-for-ASME-Review-Teams-(General)-10-10-2023.pdfBPV-GUI-01-Guide-for-ASME-Review-Teams-(General)-10-10-2023.pdf
BPV-GUI-01-Guide-for-ASME-Review-Teams-(General)-10-10-2023.pdf
MIGUELANGEL966976
 
A review on techniques and modelling methodologies used for checking electrom...
A review on techniques and modelling methodologies used for checking electrom...A review on techniques and modelling methodologies used for checking electrom...
A review on techniques and modelling methodologies used for checking electrom...
nooriasukmaningtyas
 
Design and Analysis of Algorithms-DP,Backtracking,Graphs,B&B
Design and Analysis of Algorithms-DP,Backtracking,Graphs,B&BDesign and Analysis of Algorithms-DP,Backtracking,Graphs,B&B
Design and Analysis of Algorithms-DP,Backtracking,Graphs,B&B
Sreedhar Chowdam
 
Modelagem de um CSTR com reação endotermica.pdf
Modelagem de um CSTR com reação endotermica.pdfModelagem de um CSTR com reação endotermica.pdf
Modelagem de um CSTR com reação endotermica.pdf
camseq
 
一比一原版(UofT毕业证)多伦多大学毕业证成绩单如何办理
一比一原版(UofT毕业证)多伦多大学毕业证成绩单如何办理一比一原版(UofT毕业证)多伦多大学毕业证成绩单如何办理
一比一原版(UofT毕业证)多伦多大学毕业证成绩单如何办理
ydteq
 
一比一原版(SFU毕业证)西蒙菲莎大学毕业证成绩单如何办理
一比一原版(SFU毕业证)西蒙菲莎大学毕业证成绩单如何办理一比一原版(SFU毕业证)西蒙菲莎大学毕业证成绩单如何办理
一比一原版(SFU毕业证)西蒙菲莎大学毕业证成绩单如何办理
bakpo1
 
原版制作(unimelb毕业证书)墨尔本大学毕业证Offer一模一样
原版制作(unimelb毕业证书)墨尔本大学毕业证Offer一模一样原版制作(unimelb毕业证书)墨尔本大学毕业证Offer一模一样
原版制作(unimelb毕业证书)墨尔本大学毕业证Offer一模一样
obonagu
 
Nuclear Power Economics and Structuring 2024
Nuclear Power Economics and Structuring 2024Nuclear Power Economics and Structuring 2024
Nuclear Power Economics and Structuring 2024
Massimo Talia
 
Low power architecture of logic gates using adiabatic techniques
Low power architecture of logic gates using adiabatic techniquesLow power architecture of logic gates using adiabatic techniques
Low power architecture of logic gates using adiabatic techniques
nooriasukmaningtyas
 
一比一原版(UMich毕业证)密歇根大学|安娜堡分校毕业证成绩单专业办理
一比一原版(UMich毕业证)密歇根大学|安娜堡分校毕业证成绩单专业办理一比一原版(UMich毕业证)密歇根大学|安娜堡分校毕业证成绩单专业办理
一比一原版(UMich毕业证)密歇根大学|安娜堡分校毕业证成绩单专业办理
zwunae
 
Understanding Inductive Bias in Machine Learning
Understanding Inductive Bias in Machine LearningUnderstanding Inductive Bias in Machine Learning
Understanding Inductive Bias in Machine Learning
SUTEJAS
 
14 Template Contractual Notice - EOT Application
14 Template Contractual Notice - EOT Application14 Template Contractual Notice - EOT Application
14 Template Contractual Notice - EOT Application
SyedAbiiAzazi1
 
Ethernet Routing and switching chapter 1.ppt
Ethernet Routing and switching chapter 1.pptEthernet Routing and switching chapter 1.ppt
Ethernet Routing and switching chapter 1.ppt
azkamurat
 

Recently uploaded (20)

Fundamentals of Induction Motor Drives.pptx
Fundamentals of Induction Motor Drives.pptxFundamentals of Induction Motor Drives.pptx
Fundamentals of Induction Motor Drives.pptx
 
一比一原版(Otago毕业证)奥塔哥大学毕业证成绩单如何办理
一比一原版(Otago毕业证)奥塔哥大学毕业证成绩单如何办理一比一原版(Otago毕业证)奥塔哥大学毕业证成绩单如何办理
一比一原版(Otago毕业证)奥塔哥大学毕业证成绩单如何办理
 
Online aptitude test management system project report.pdf
Online aptitude test management system project report.pdfOnline aptitude test management system project report.pdf
Online aptitude test management system project report.pdf
 
一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单专业办理
一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单专业办理一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单专业办理
一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单专业办理
 
ACEP Magazine edition 4th launched on 05.06.2024
ACEP Magazine edition 4th launched on 05.06.2024ACEP Magazine edition 4th launched on 05.06.2024
ACEP Magazine edition 4th launched on 05.06.2024
 
Hierarchical Digital Twin of a Naval Power System
Hierarchical Digital Twin of a Naval Power SystemHierarchical Digital Twin of a Naval Power System
Hierarchical Digital Twin of a Naval Power System
 
spirit beverages ppt without graphics.pptx
spirit beverages ppt without graphics.pptxspirit beverages ppt without graphics.pptx
spirit beverages ppt without graphics.pptx
 
BPV-GUI-01-Guide-for-ASME-Review-Teams-(General)-10-10-2023.pdf
BPV-GUI-01-Guide-for-ASME-Review-Teams-(General)-10-10-2023.pdfBPV-GUI-01-Guide-for-ASME-Review-Teams-(General)-10-10-2023.pdf
BPV-GUI-01-Guide-for-ASME-Review-Teams-(General)-10-10-2023.pdf
 
A review on techniques and modelling methodologies used for checking electrom...
A review on techniques and modelling methodologies used for checking electrom...A review on techniques and modelling methodologies used for checking electrom...
A review on techniques and modelling methodologies used for checking electrom...
 
Design and Analysis of Algorithms-DP,Backtracking,Graphs,B&B
Design and Analysis of Algorithms-DP,Backtracking,Graphs,B&BDesign and Analysis of Algorithms-DP,Backtracking,Graphs,B&B
Design and Analysis of Algorithms-DP,Backtracking,Graphs,B&B
 
Modelagem de um CSTR com reação endotermica.pdf
Modelagem de um CSTR com reação endotermica.pdfModelagem de um CSTR com reação endotermica.pdf
Modelagem de um CSTR com reação endotermica.pdf
 
一比一原版(UofT毕业证)多伦多大学毕业证成绩单如何办理
一比一原版(UofT毕业证)多伦多大学毕业证成绩单如何办理一比一原版(UofT毕业证)多伦多大学毕业证成绩单如何办理
一比一原版(UofT毕业证)多伦多大学毕业证成绩单如何办理
 
一比一原版(SFU毕业证)西蒙菲莎大学毕业证成绩单如何办理
一比一原版(SFU毕业证)西蒙菲莎大学毕业证成绩单如何办理一比一原版(SFU毕业证)西蒙菲莎大学毕业证成绩单如何办理
一比一原版(SFU毕业证)西蒙菲莎大学毕业证成绩单如何办理
 
原版制作(unimelb毕业证书)墨尔本大学毕业证Offer一模一样
原版制作(unimelb毕业证书)墨尔本大学毕业证Offer一模一样原版制作(unimelb毕业证书)墨尔本大学毕业证Offer一模一样
原版制作(unimelb毕业证书)墨尔本大学毕业证Offer一模一样
 
Nuclear Power Economics and Structuring 2024
Nuclear Power Economics and Structuring 2024Nuclear Power Economics and Structuring 2024
Nuclear Power Economics and Structuring 2024
 
Low power architecture of logic gates using adiabatic techniques
Low power architecture of logic gates using adiabatic techniquesLow power architecture of logic gates using adiabatic techniques
Low power architecture of logic gates using adiabatic techniques
 
一比一原版(UMich毕业证)密歇根大学|安娜堡分校毕业证成绩单专业办理
一比一原版(UMich毕业证)密歇根大学|安娜堡分校毕业证成绩单专业办理一比一原版(UMich毕业证)密歇根大学|安娜堡分校毕业证成绩单专业办理
一比一原版(UMich毕业证)密歇根大学|安娜堡分校毕业证成绩单专业办理
 
Understanding Inductive Bias in Machine Learning
Understanding Inductive Bias in Machine LearningUnderstanding Inductive Bias in Machine Learning
Understanding Inductive Bias in Machine Learning
 
14 Template Contractual Notice - EOT Application
14 Template Contractual Notice - EOT Application14 Template Contractual Notice - EOT Application
14 Template Contractual Notice - EOT Application
 
Ethernet Routing and switching chapter 1.ppt
Ethernet Routing and switching chapter 1.pptEthernet Routing and switching chapter 1.ppt
Ethernet Routing and switching chapter 1.ppt
 

Fisheye-Omnidirectional View in Autonomous Driving III

  • 1. Fisheye/Omnidirectional View in Autonomous Driving III YuHuang Yu.huang07@gmail.com Sunnyvale,California
  • 2. Outline • DS-PASS: Detail-Sensitive Panoramic Annular Semantic Segmentation through SwaftNet for Surrounding Sensing • The OmniScape Dataset (ICRA’2020) • Universal Semantic Segmentation for Fisheye Urban Driving Images • Vehicle Re-ID for Surround-view Camera System • SynDistNet: Self-Supervised Monocular Fisheye Camera Distance Estimation Synergized with Semantic Segmentation for Autonomous Driving • Towards Autonomous Driving: a Multi-Modal 360 Perception Proposal
  • 3. DS-PASS: Detail-Sensitive Panoramic Annular Semantic Segmentation through SwaftNet for Surrounding Sensing • In this paper, propose a network adaptation framework to achieve Panoramic Annular Semantic Segmentation (PASS), which allows to re-use conventional pinhole-view image datasets, enabling modern segmentation networks to comfortably adapt to panoramic images. • Specifically, adapt our proposed SwaftNet to enhance the sensitivity to details by implementing attention-based latera connections between the detail-critical encoder layers and the context-critical decoder layers. It benchmarks the performance of efficient segmenters on panoramic segmentation with an extended PASS dataset, demonstrating that the proposed realtime SwaftNet outperforms state-of-the-art efficient networks. • Furthermore, assess real-world performance when deploying the Detail-Sensitive PASS (DS-PASS) system on a mobile robot and an instrumented vehicle, as well as the benefit of panoramic semantics for visual odometry, showing the robustness and potential to support diverse navigational applications.
  • 4. DS-PASS: Detail-Sensitive Panoramic Annular Semantic Segmentation through SwaftNet for Surrounding Sensing Panoramic annular semantic segmentation. On the left: raw annular image; First row on the right: unfolded panorama; Second row: panoramic segmentation of the baseline method, the classification heatmap of pedestrian is blurry; Third row: detail-sensitive panoramic segmentation of the proposed method, the heatmap and semantic map are detail-preserved.
  • 5. DS-PASS: Detail-Sensitive Panoramic Annular Semantic Segmentation through SwaftNet for Surrounding Sensing The proposed framework for panoramic annular semantic segmentation. Each feature model (corresponding to the single feature model like encoder in conventional architectures) is responsible for predicting the semantically-meaningful high-level feature map of a panorama segment while interacting with the neighboring ones through cross-segment padding (indicated by the dotted arrows). Fusion model incorporates the feature maps and completes the panoramic segmentation. The proposed architecture follows the single- scale model of SwiftNet, based on an U- shape structure like Unet and LinkNet.
  • 6. DS-PASS: Detail-Sensitive Panoramic Annular Semantic Segmentation through SwaftNet for Surrounding Sensing The proposed architecture with attention-based lateral connections to blend semantically- rich deep layers with spatially-detailed shallow layers. The down-sampling path with the SPP module (encoder) corresponds to the feature model in last figure, while the up-sampling path (decoder) corresponds to the fusion model
  • 7. DS-PASS: Detail-Sensitive Panoramic Annular Semantic Segmentation through SwaftNet for Surrounding Sensing
  • 8. The OmniScape Dataset • Despite the utility and benefits of omnidirectional images in robotics and automotive applications, there are no datasets of omnidirectional images available with semantic segmentation, depth map, and dynamic properties. • This is due to the time cost and human effort required to annotate ground truth images. • This paper presents a framework for generating omnidirectional images using images that are acquired from a virtual environment. • For this purpose, it demonstrates the relevance of the proposed framework on two well-known simulators: CARLA Simulator, which is an open-source simulator for autonomous driving research, and Grand Theft Auto V(GTA V), which is a very high quality video game. • It explains in details the generated OmniScape dataset, which includes stereo fisheye and catadioptric images acquired from the two front sides of a motorcycle, including semantic segmentation, depth map, intrinsic parameters of the cameras and the dynamic parameters of the motorcycle. • It is worth noting that the case of two-wheeled vehicles is more challenging than cars due to the specific dynamic of these vehicles.
  • 9. The OmniScape Dataset Recording platform and a representation of the different modalities
  • 10. The OmniScape Dataset Lookup table construction to set the omnidirectional image pixel values
  • 11. The OmniScape Dataset The omnidirectional camera model
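As a rough illustration of the lookup-table idea from the previous slides, the sketch below precomputes, for every pixel of an equidistant fisheye image, the source coordinates in a single forward-facing pinhole rendering and then remaps colors with OpenCV. The equidistant model, the single source face, and all numeric values are simplifying assumptions; the paper's pipeline covers the full omnidirectional field of view from several rendered views and also supports catadioptric models.

```python
import numpy as np
import cv2  # OpenCV is only needed for the final remap

def fisheye_lut_from_pinhole(out_size, f_fish, pin_K):
    """Build a lookup table mapping each pixel of an equidistant fisheye image
    (r = f * theta) to source coordinates in a forward-facing pinhole image.
    pin_K = (fx, fy, cx, cy) of the pinhole rendering. Simplified single-face
    sketch, not the paper's exact procedure."""
    H, W = out_size
    cx, cy = W / 2.0, H / 2.0
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    x, y = u - cx, v - cy
    r = np.sqrt(x ** 2 + y ** 2)
    theta = r / f_fish                       # incidence angle from the optical axis
    phi = np.arctan2(y, x)
    # unit ray direction (z forward)
    dx = np.sin(theta) * np.cos(phi)
    dy = np.sin(theta) * np.sin(phi)
    dz = np.cos(theta)
    valid = dz > 1e-6                        # rays pointing backwards cannot hit the face
    fx, fy, pcx, pcy = pin_K
    map_x = np.where(valid, fx * dx / np.maximum(dz, 1e-6) + pcx, -1).astype(np.float32)
    map_y = np.where(valid, fy * dy / np.maximum(dz, 1e-6) + pcy, -1).astype(np.float32)
    return map_x, map_y

# usage: remap an 800x800 rendered pinhole view into a 600x600 fisheye image
map_x, map_y = fisheye_lut_from_pinhole((600, 600), f_fish=180.0,
                                        pin_K=(400.0, 400.0, 400.0, 400.0))
pinhole = np.zeros((800, 800, 3), np.uint8)      # stand-in for a rendered view
fisheye = cv2.remap(pinhole, map_x, map_y, cv2.INTER_LINEAR)
```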
  • 13. Universal Semantic Segmentation for Fisheye Urban Driving Images • When performing semantic image segmentation, a wider field of view (FoV) helps to obtain more information about the surrounding environment, making automatic driving safer and more reliable, which could be offered by fisheye cameras. • However, large public fisheye datasets are not available, and the fisheye images captured by a fisheye camera with a large FoV come with large distortion, so commonly-used semantic segmentation models cannot be directly utilized. • In this paper, a seven-DoF augmentation method is proposed to transform rectilinear images into fisheye images in a more comprehensive way. • In training, rectilinear images are transformed into fisheye images with seven DoF, which simulates fisheye images from different positions, orientations and focal lengths. • The results show that training with the seven-DoF augmentation can improve the model's accuracy and robustness against differently distorted fisheye data. • This seven-DoF augmentation provides a universal semantic segmentation solution for fisheye cameras in different autonomous driving applications. • The code and configurations are released at https://github.com/Yaozhuwa/FisheyeSeg.
  • 14. Universal Semantic Segmentation for Fisheye Urban Driving Images Projection model of the fisheye camera. PW is a point on a rectilinear image that is placed on the x-y plane of the world coordinate system. θ is the angle of incidence of the point relative to the fisheye camera. P is the imaging point of PW on the fisheye image, with |OP| = fθ. The relative rotation and translation between the world coordinate system and the camera coordinate system result in six degrees of freedom.
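The caption's projection model (|OP| = fθ together with a 6-DoF pose) can be written out directly. The sketch below projects world points into fisheye image coordinates under the equidistant model, assuming image coordinates are expressed relative to the principal point; it is a generic illustration, not the released code.

```python
import numpy as np

def project_equidistant(P_w, R, t, f):
    """Project 3-D points P_w (N, 3), e.g. points of a rectilinear image placed
    on the world x-y plane, into a fisheye image under the equidistant model
    |OP| = f * theta. R (3, 3) and t (3,) are the camera pose (the six extrinsic
    DoF in the figure); f is the focal length in pixels."""
    P_c = (R @ P_w.T).T + t                               # world -> camera coordinates
    x, y, z = P_c[:, 0], P_c[:, 1], P_c[:, 2]
    theta = np.arccos(z / np.linalg.norm(P_c, axis=1))    # incidence angle w.r.t. optical axis
    phi = np.arctan2(y, x)                                # azimuth around the optical axis
    r = f * theta                                         # equidistant mapping
    # image coordinates relative to the principal point
    return np.stack([r * np.cos(phi), r * np.sin(phi)], axis=1)

# usage: a point 1 m ahead and 0.5 m to the right of the camera, identity pose
uv = project_equidistant(np.array([[0.5, 0.0, 1.0]]), np.eye(3), np.zeros(3), f=300.0)
```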
  • 15. Universal Semantic Segmentation for Fisheye Urban Driving Images The six-DoF augmentation. Except for the first row, every image is transformed using a virtual fisheye camera with a focal length of 300 pixels. The letter in brackets indicates which axis the camera is translating along or rotating around.
  • 16. Universal Semantic Segmentation for Fisheye Urban Driving Images The synthetic fisheye images with different focal lengths f.
  • 17. Universal Semantic Segmentation for Fisheye Urban Driving Images Seven-DoF augmentation and the compared settings: 1. Base Aug: random clipping + random flip + color jitter + z-aug with fixed focal length 2. RandF Aug: Base Aug + random focal length 3. RandR Aug: Base Aug + random rotation 4. RandT Aug: Base Aug + random translation 5. RandFR Aug: Base Aug + random focal length + random rotation 6. RandFT Aug: Base Aug + random focal length + random translation 7. Six-DoF Aug: Base Aug + random rotation + random translation 8. Seven-DoF Aug: Base Aug + random focal length + random rotation + random translation
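A minimal sketch of how the seven augmentation parameters could be sampled per training image is given below; the sampling ranges are placeholders and do not come from the released configuration. The sampled rotation and translation would drive a rectilinear-to-fisheye warp such as the equidistant projection sketched earlier.

```python
import numpy as np

def sample_seven_dof(rng,
                     f_range=(200.0, 400.0),      # focal length in pixels (assumed range)
                     rot_range=np.deg2rad(10.0),  # max rotation about each axis (assumed)
                     trans_range=0.3):            # max translation along each axis (assumed)
    """Sample the seven augmentation parameters: focal length (1 DoF),
    rotation about x/y/z (3 DoF) and translation along x/y/z (3 DoF)."""
    f = rng.uniform(*f_range)
    angles = rng.uniform(-rot_range, rot_range, size=3)
    t = rng.uniform(-trans_range, trans_range, size=3)
    return f, angles, t

rng = np.random.default_rng(0)
f, angles, t = sample_seven_dof(rng)
# build a rotation matrix from `angles`, then warp the rectilinear image and its
# label map into a virtual fisheye view with focal length `f` and pose (R, t)
```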
  • 18. Vehicle Re-ID for Surround-view Camera System • Vehicle re-identification (Re-ID) plays a critical role in the perception system of autonomous driving and has attracted more and more attention in recent years. • However, there is no existing complete solution for the surround-view system mounted on the vehicle. • There are two main challenges in this scenario: i) In single-camera view, it is difficult to recognize the same vehicle from past image frames due to fisheye distortion, occlusion, truncation, etc. ii) In multi-camera view, the appearance of the same vehicle varies greatly across different camera viewpoints. • Thus, an integral vehicle Re-ID solution is proposed to address these problems. • Specifically, a quality evaluation mechanism balances the effects of tracking-box drift and target consistency. • Besides, an attention-based Re-ID network is adopted and combined with a spatial constraint strategy to further boost the performance between different cameras. • The code and annotated fisheye dataset will be released for the benefit of the community.
  • 19. Vehicle Re-ID for Surround-view Camera System 360 surround-view camera system. Each arrow points to an image captured by the corresponding camera.
  • 20. Vehicle Re-ID for Surround-view Camera System Vehicles in single view of fisheye camera. (a) The same vehicle features change dramatically in consecutive frames and vehicles tend to obscure each other. (b) Matching errors are caused by tracking results. (c) The vehicle center indicated by the orange box is stable while the IoU in consecutive frames indicated by the yellow box decreases with movement.
  • 21. Vehicle Re-ID for Surround-view Camera System The overall framework of vehicle Re-ID in a single camera. Each object is assigned its own tracker to realize Re-ID within a single camera channel. Tracking templates are initialized with object detection results. All tracking outputs are post-processed by the quality evaluation module to deal with distorted or occluded objects.
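The slides do not spell out the quality evaluation module, so the sketch below is only one plausible heuristic: it scores a tracking output by combining the tracker's own confidence with the drift of the box center between consecutive frames, motivated by the earlier observation that the center stays stable while the IoU decreases under fisheye distortion. The weighting and normalization are assumptions.

```python
def box_center(box):
    """box = (x1, y1, x2, y2)"""
    return ((box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0)

def track_quality(prev_box, cur_box, tracker_score, w_center=0.5):
    """Score a tracking output in [0, 1]. Center drift is normalized by the box
    diagonal so that distorted fisheye boxes are treated consistently.
    Illustrative heuristic; the paper's quality evaluation module may differ."""
    cx0, cy0 = box_center(prev_box)
    cx1, cy1 = box_center(cur_box)
    diag = ((cur_box[2] - cur_box[0]) ** 2 + (cur_box[3] - cur_box[1]) ** 2) ** 0.5
    drift = ((cx1 - cx0) ** 2 + (cy1 - cy0) ** 2) ** 0.5 / (diag + 1e-9)
    center_term = max(0.0, 1.0 - drift)
    # low-quality outputs (large drift, low confidence) can then be suppressed
    return w_center * center_term + (1.0 - w_center) * tracker_score

# a box whose center barely moves keeps a high score even if its IoU drops
q = track_quality((100, 100, 180, 160), (110, 104, 200, 170), tracker_score=0.8)
```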
  • 22. Vehicle Re-ID for Surround-view Camera System The overall framework of vehicle Re-ID in multi-camera. For a new target, the Re-ID model is first used to extract features, and then distance metrics are computed between this feature and the features in the gallery. Besides, a spatial constraint strategy is adopted to improve the association.
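A simplified version of the gallery matching with a spatial constraint might look as follows; the cosine distance, the thresholds, and the ground-plane radius are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def match_to_gallery(query_feat, query_pos, gallery_feats, gallery_pos,
                     dist_thresh=0.5, spatial_radius=5.0):
    """Match a new target against the gallery using cosine distance on Re-ID
    features, gated by a spatial constraint: candidates whose last known
    ground-plane position is farther than `spatial_radius` metres are skipped."""
    query_feat = query_feat / (np.linalg.norm(query_feat) + 1e-9)
    best_id, best_dist = None, dist_thresh
    for gid, (feat, pos) in enumerate(zip(gallery_feats, gallery_pos)):
        if np.linalg.norm(np.asarray(pos) - np.asarray(query_pos)) > spatial_radius:
            continue                           # spatial constraint: implausible match
        feat = feat / (np.linalg.norm(feat) + 1e-9)
        dist = 1.0 - float(query_feat @ feat)  # cosine distance
        if dist < best_dist:
            best_id, best_dist = gid, dist
    return best_id                             # None -> register as a new identity

# usage with 128-D features and (x, y) positions in vehicle coordinates
gallery_feats = [np.random.rand(128), np.random.rand(128)]
gallery_pos = [(2.0, 1.0), (30.0, -4.0)]
matched = match_to_gallery(np.random.rand(128), (2.5, 0.8), gallery_feats, gallery_pos)
```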
  • 23. Vehicle Re-ID for Surround-view Camera System Samples captured by different cameras. (a) The appearances of the same vehicle captured by different cameras vary greatly; the same color represents the same object. (b) Objects with a similar appearance may appear in the same camera view, as shown by the two black vehicles in green boxes.
  • 24. Vehicle Re-ID for Surround-view Camera System Illustration of the multi-camera Re-ID network. The network is a two-branch parallel structure: the top branch is employed to make the network pay more attention to object regions, and the other branch extracts global features.
  • 25. Vehicle Re-ID for Surround-view Camera System Projection uncertainty of key points. Ellipse 1 and ellipse 2 are uncertainty ranges of front and left (right) cameras, respectively.
  • 26. Vehicle Re-ID for Surround-view Camera System
  • 27. SynDistNet: Self-Supervised Monocular Fisheye Camera Distance Estimation Synergized with Semantic Segmentation for Autonomous Driving • In this paper, introduce a novel multi-task learning strategy to improve self-supervised monocular distance estimation on fisheye and pinhole camera images. • The contribution of this work is threefold: • Firstly, we introduce a novel distance estimation network architecture using a self-attention based encoder coupled with robust semantic feature guidance to the decoder that can be trained in a one-stage fashion. • Secondly, we integrate a generalized robust loss function, which improves performance significantly while removing the need for hyperparameter tuning with the reprojection loss. • Finally, we reduce the artifacts caused by dynamic objects violating the static-world assumption by using a semantic masking strategy. • As there is limited work on fisheye cameras, the method is evaluated on KITTI using a pinhole model. • It achieves state-of-the-art performance among self-supervised methods without requiring external scale estimation.
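The slides do not give the generalized robust loss in closed form. The sketch below implements the general robust loss of Barron (2019), a common choice matching the description here, applied to a photometric residual; the paper may use an adaptive variant in which alpha is learned, so treat this as an assumption-laden sketch rather than the exact objective.

```python
import torch

def general_robust_loss(x, alpha=1.0, c=1.0):
    """General robust loss (Barron, 2019), valid for alpha not in {0, 2}:
        rho(x, alpha, c) = |alpha-2|/alpha * (((x/c)^2 / |alpha-2| + 1)^(alpha/2) - 1)
    Sweeping alpha interpolates between an L2-like loss (alpha -> 2), a
    Charbonnier/pseudo-Huber loss (alpha = 1) and heavier-tailed losses.
    Here x would be the photometric reprojection residual."""
    a = torch.tensor(float(alpha))
    b = torch.abs(a - 2.0)
    return (b / a) * (((x / c) ** 2 / b + 1.0) ** (a / 2.0) - 1.0)

# usage: a robust mean over a toy residual tensor
residual = torch.linspace(-3, 3, steps=7)
loss = general_robust_loss(residual, alpha=1.0, c=0.5).mean()
```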
  • 28. SynDistNet: Self-Supervised Monocular Fisheye Camera Distance Estimation Synergized with Semantic Segmentation for Autonomous Driving Overview of the joint prediction of distance ^Dt and semantic segmentation Mt from a single input image It. Compared to previous approaches, the semantically guided distance estimation produces sharper depth edges and reasonable distance estimates for dynamic objects.
  • 29. SynDistNet: Self-Supervised Monocular Fisheye Camera Distance Estimation Synergized with Semantic Segmentation for Autonomous Driving • The self-supervised depth and distance estimation is developed within a self-supervised monocular structure-from-motion (SfM) framework, which requires two networks aiming at learning: • 1. a monocular depth/distance model gD : It -> ^Dt predicting a scale-ambiguous depth or distance (the equivalent of depth for general image geometries) ^Dt = gD(It) per pixel ij in the target image It; • 2. an ego-motion predictor gT : (It, It') -> Tt->t' predicting a set of 6 degrees of freedom that implement a rigid transformation Tt->t' ∊ SE(3) between the target image It and the set of reference images It'. Typically, t' ∊ {t + 1, t - 1}, i.e. the frames It-1 and It+1 are used as reference images, although using a larger window is possible.
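These two networks are tied together by view synthesis: the reference frame is warped into the target view using the predicted depth and pose, and the photometric residual between the warped and target images supervises both networks. Below is a minimal pinhole-case sketch of that warping step; for fisheye images the back-projection would use the camera's distance model instead of K⁻¹, and the function shapes and names are assumptions.

```python
import torch
import torch.nn.functional as F

def warp_source_to_target(src_img, depth_t, T_t2s, K, K_inv):
    """Synthesize the target view from a reference (source) frame.
    src_img: (B, 3, H, W), depth_t: (B, 1, H, W), T_t2s: (B, 4, 4),
    K, K_inv: (3, 3). Standard SfM view-synthesis step, pinhole case."""
    B, _, H, W = depth_t.shape
    v, u = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                          torch.arange(W, dtype=torch.float32), indexing="ij")
    ones = torch.ones_like(u)
    pix = torch.stack([u, v, ones], dim=0).reshape(1, 3, -1).expand(B, 3, H * W)
    cam = (K_inv @ pix) * depth_t.reshape(B, 1, -1)      # back-project to 3-D
    cam_h = torch.cat([cam, torch.ones(B, 1, H * W)], dim=1)
    src = K @ (T_t2s @ cam_h)[:, :3, :]                  # transform and re-project
    uv = src[:, :2, :] / src[:, 2:3, :].clamp(min=1e-6)
    # normalize pixel coordinates to [-1, 1] for grid_sample
    grid = torch.stack([2.0 * uv[:, 0] / (W - 1) - 1.0,
                        2.0 * uv[:, 1] / (H - 1) - 1.0], dim=2).reshape(B, H, W, 2)
    return F.grid_sample(src_img, grid, padding_mode="border", align_corners=True)

# photometric self-supervision: compare the warped source with the target image,
# e.g. loss = general_robust_loss(warped - target).mean()
```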
  • 30. SynDistNet: Self-Supervised Monocular Fisheye Camera Distance Estimation Synergized with Semantic Segmentation for Autonomous Driving Overview of the proposed framework for the joint prediction of distance and semantic segmentation. The upper part (blue blocks) describes the individual steps for the depth estimation, while the green blocks describe the steps needed for predicting the semantic segmentation.
  • 31. SynDistNet: Self-Supervised Monocular Fisheye Camera Distance Estimation Synergized with Semantic Segmentation for Autonomous Driving Visualization of the proposed network architecture to semantically guide the depth estimation. It utilizes a self-attention-based encoder and a semantically guided decoder using pixel-adaptive convolutions.
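The "pixel-adaptive convolutions" in the decoder can be illustrated with the simplified PyTorch sketch below, in which a standard convolution is modulated per pixel by a Gaussian affinity computed on the semantic guidance features. This follows the general pixel-adaptive convolution idea rather than the authors' exact layer, and it is written for clarity rather than speed.

```python
import torch
import torch.nn.functional as F

def pixel_adaptive_conv(x, guide, weight, kernel_size=3):
    """Pixel-adaptive convolution: a standard convolution whose kernel is
    reweighted per pixel by a Gaussian affinity on guidance features (here,
    features from the semantic branch). Slow reference sketch."""
    B, C_in, H, W = x.shape
    k, pad = kernel_size, kernel_size // 2
    # gather k x k neighbourhoods of the input and the guidance features
    x_unf = F.unfold(x, k, padding=pad).reshape(B, C_in, k * k, H, W)
    g_unf = F.unfold(guide, k, padding=pad).reshape(B, guide.shape[1], k * k, H, W)
    # Gaussian affinity between each pixel and its neighbours in guidance space
    affinity = torch.exp(-0.5 * ((g_unf - guide.unsqueeze(2)) ** 2).sum(1, keepdim=True))
    x_mod = (x_unf * affinity).reshape(B, C_in * k * k, H, W)
    w = weight.reshape(weight.shape[0], C_in * k * k)          # (C_out, C_in*k*k)
    return torch.einsum('oc,bchw->bohw', w, x_mod)

# usage: a decoder feature guided by semantic features from the segmentation head
out = pixel_adaptive_conv(torch.randn(1, 32, 16, 16), torch.randn(1, 8, 16, 16),
                          torch.randn(64, 32, 3, 3))
```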
  • 32. SynDistNet: Self-Supervised Monocular Fisheye Camera Distance Estimation Synergized with Semantic Segmentation for Autonomous Driving Quantitative performance comparison of the network with other self-supervised monocular methods for depths up to 80 m on KITTI. Original uses raw depth maps for evaluation, and Improved uses annotated depth maps. At test time, all methods except FisheyeDistanceNet, PackNet-SfM and this method scale the estimated depths using the median ground-truth LiDAR depth.
  • 33. SynDistNet: Self-Supervised Monocular Fisheye Camera Distance Estimation Synergized with Semantic Segmentation for Autonomous Driving Qualitative result comparison on the fisheye WoodScape dataset between the baseline model without the proposed contributions and SynDistNet. SynDistNet can recover the distance of dynamic objects (left images), which resolves the infinite-distance issue. In the 3rd and 4th columns, one can see that semantic guidance helps to recover thin structures and resolve the distance of homogeneous areas, outputting sharp distance maps on raw fisheye images.
  • 34. Towards Autonomous Driving: a Multi-Modal 360 Perception Proposal • A multi-modal 360 framework for 3D object detection and tracking for autonomous vehicles is presented. • The process is divided into four main stages. • First, images are fed into a CNN network to obtain instance segmentation of the surrounding road participants. • Second, LiDAR-to-image association is performed for the estimated mask proposals. • Then, the isolated points of every object are processed by a PointNet ensemble to compute their corresponding 3D bounding boxes and poses. • A tracking stage based on an Unscented Kalman Filter is used to track the agents over time. • The solution, based on a sensor fusion configuration, provides accurate and reliable road environment detection. • A wide variety of tests of the system, deployed in an autonomous vehicle, have successfully assessed the suitability of the proposed perception stack in a real autonomous driving application.
  • 35. Towards Autonomous Driving: a Multi-Modal 360 Perception Proposal • The following sensors are employed: • Five CMOS cameras, each equipped with an 85° HFOV lens. • A 32-layer LiDAR scanner featuring a minimum vertical resolution of 0.33° and a range of 200 m (Velodyne Ultra Puck). • Accurate synchronization and calibration between sensors are of paramount importance. • Hence, they are all synchronized with the clock provided by a GPS receiver, and the cameras are externally triggered at a 10 Hz rate. • Regarding calibration, the cameras' intrinsic parameters are obtained through the checkerboard-based approach by Zhang, and the extrinsic parameters representing the relative position between sensors are estimated through a monocular-ready variant of the velo2cam method. • The result of this automatic procedure is further validated by visual inspection.
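With those calibration results in hand, associating LiDAR points with image detections reduces to a projection. Below is a generic sketch, where T_cam_lidar and K stand for the estimated extrinsics and intrinsics; it is not the authors' code.

```python
import numpy as np

def project_lidar_to_image(points, T_cam_lidar, K):
    """Project LiDAR points (N, 3) into a camera image using the extrinsic
    transform T_cam_lidar (4, 4), e.g. from a velo2cam-style calibration, and the
    intrinsics K (3, 3) from checkerboard calibration. Returns (N, 2) pixel
    coordinates and a mask of points lying in front of the camera."""
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])
    pts_cam = (T_cam_lidar @ pts_h.T)[:3]            # (3, N) in the camera frame
    in_front = pts_cam[2] > 0.1                      # keep points ahead of the camera
    uvw = K @ pts_cam
    uv = (uvw[:2] / uvw[2]).T                        # (N, 2) pixel coordinates
    return uv, in_front

# points whose projection falls inside an instance mask then form the frustum
# that is handed to the corresponding F-PointNet in the next stage
```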
  • 36. Towards Autonomous Driving: a Multi-Modal 360 Perception Proposal • The proposed solution is based on three pillars. • First, visual data is employed to perform detection and instance-level semantic segmentation. • Then, the LiDAR points whose image projections fall within each obstacle's bounding polygon are employed to estimate its 3D pose. • Finally, the tracking stage provides consistency, thus mitigating occasional misdetections and enabling trajectory prediction. • The combination of these three stages allows accurate and robust identification of the dynamic agents surrounding the vehicle.
  • 37. Towards Autonomous Driving: a Multi-Modal 360 Perception Proposal System overview. Images from all the cameras are processed by individual instances of Mask R-CNN, which provide detections endowed with a semantic mask. LiDAR points in these regions are used as input for several F-PointNets responsible for estimating a 3D bounding box and its position with respect to the car. Then, 3D detections from each camera are fused using an NMS procedure. A subsequent tracking stage provides consistency across frames and avoids temporary misdetections.
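The cross-camera fusion step can be sketched as a greedy NMS over the 3D detections from all cameras on the ground plane. The axis-aligned bird's-eye-view IoU and the threshold below are simplifications of what an actual implementation (likely using oriented boxes) would use.

```python
import numpy as np

def bev_iou(a, b):
    """Axis-aligned IoU of two bird's-eye-view boxes (cx, cy, length, width);
    a simplification of the oriented-box overlap normally used."""
    ax1, ay1, ax2, ay2 = a[0] - a[2] / 2, a[1] - a[3] / 2, a[0] + a[2] / 2, a[1] + a[3] / 2
    bx1, by1, bx2, by2 = b[0] - b[2] / 2, b[1] - b[3] / 2, b[0] + b[2] / 2, b[1] + b[3] / 2
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    return inter / (a[2] * a[3] + b[2] * b[3] - inter + 1e-9)

def fuse_detections(dets, scores, iou_thresh=0.3):
    """Fuse 3D detections coming from overlapping cameras by keeping, within each
    group of mutually overlapping boxes, only the highest-scoring one (greedy NMS)."""
    order = np.argsort(scores)[::-1]
    keep = []
    for i in order:
        if all(bev_iou(dets[i], dets[j]) < iou_thresh for j in keep):
            keep.append(i)
    return keep

# duplicate detections of one vehicle seen by two adjacent cameras collapse to one
dets = [(10.0, 2.0, 4.5, 1.9), (10.2, 2.1, 4.4, 1.8), (25.0, -3.0, 4.6, 1.9)]
keep = fuse_detections(dets, scores=[0.9, 0.7, 0.8])
```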
  • 38. Towards Autonomous Driving: a Multi-Modal 360 Perception Proposal Qualitative results of the proposed system on some typical traffic scenarios. From top to bottom: 3D detections in rear-left, front-left, front, front-right, and rear-right cameras, and Bird’s Eye View representation.