A Three-Dimensional
Representation method for
Noisy Point Clouds based on
Growing Self-Organizing Maps
accelerated on GPUs
Author:

Sergio Orts Escolano

Supervisors: Dr. José García Rodríguez
Dr. Miguel Ángel Cazorla Quevedo
Doctoral programme in technologies for the information society
Outline

• Introduction
• 3D Representation using Growing Self-Organizing Maps
• Improving keypoint detection from noisy 3D observations
• GPGPU parallel implementations
• Applications
• Conclusions

2/79
Introduction

Index

• Introduction
  • Motivation
  • Framework
  • Goals
  • Proposal
• 3D representation using growing self-organizing maps
• Improving keypoint detection from noisy 3D observations
• GPGPU parallel implementations
• Applications
• Conclusions

3/79
Introduction

Motivation

Motivation

• Most computer vision problems require an effective way of representing the data
  • Graphs, regions of interest (ROI), B-splines, octrees, histograms, …
  • A key step for later processing stages: feature extraction, feature matching, classification, keypoint detection, …

4/79
Introduction

Motivation

Motivation (II)

• 3D data captured from the real world
  • Implicitly comprises complex structures and non-linear models
• The advent of low-cost 3D sensors
  • E.g. Microsoft Kinect, Asus Xtion, PrimeSense Carmine, …
  • RGB and Depth (RGB-D) streams (25 frames per second (fps))
  • High levels of noise and outliers
• Only a few works in 3D computer vision deal with real-time constraints
  • 3D data processing algorithms are computationally expensive
• Finding a 3D model with different features:
  • Rapid adaptation
  • Good quality of representation (topology preservation)
  • Flexibility (non-stationary data)
  • Robustness to noisy data and outliers

5/79
Introduction

Motivation

Motivation (III)

• 3D models of objects and scenes have been extensively used in computer graphics
  • Suitable structure for rendering and display
  • Common graphics representations include quadric surfaces [Gotardo et al., 2004], B-spline surfaces [Gregorski et al., 2000], and subdivision surfaces
  • Not general enough to handle the variety of features required in computer vision problems: flexibility, adaptation, noise awareness, …

(Left) Point cloud captured from a manufactured object (builder helmet).
(Right) 3D mesh generated from the captured point cloud (post-processed)

6/79
Introduction

Framework

Framework

• Regional research project (GV/2011/034)
  • Title: “Visual surveillance systems for the identification and characterization of anomalous behaviour”. Project financed by the Valencian Government in Spain
• Regional research project (GRE09-16)
  • Title: “Visual surveillance systems for the identification and characterization of anomalous behaviour in restricted environments under temporal constraints”. Project financed by the University of Alicante in Spain
• National research project (DPI2009-07144)
  • Title: “Cooperative Simultaneous Localization and Mapping (SLAM) in large scale environments”
• Research stay at IPAB – University of Edinburgh (BEFPI/2012/056)
  • Title: “Real-time 3D feature estimation and keypoint detection of scenes using GPGPUs”

7/79
Introduction

Goals

Goals

• Proposal and validation of a 3D representation model and a data-fitting algorithm for noisy point clouds
  • Deals with noisy data and outliers
  • Flexible
  • Dynamic (non-stationary data)
  • Topology preserving
• An accelerated hardware implementation of the proposed technique
  • Considerable speed-up with respect to CPU implementations
  • Real-time frame rates

8/79
Introduction

Goals

Goals (II)

• Validation of the proposed method on different real computer vision problems handling 3D data:
  • Robot vision: 6DoF egomotion
  • 3D object recognition
  • Computer-aided design/manufacturing (CAD/CAM)
• Integration of 3D data processing algorithms in complex computer vision systems
  • Filtering, downsampling, normal estimation, feature extraction, keypoint detection, matching, …
  • The use of a GPU as a general-purpose processor

9/79
Introduction

Proposal

Proposal

• Growing Self-Organizing Maps (GSOM) for 3D data representation
  • Low-cost 3D sensors: noisy data
  • Time-constrained conditions
  • Applications: 3D computer vision problems
• Hardware-based implementation of the proposed GSOM method
  • General-Purpose computing on Graphics Processing Units (GPGPU) paradigm
• Integration of the entire pipeline of 3D computer vision systems using the GPGPU paradigm

10/79
Index

• Introduction
• 3D Representation using Growing Self-Organizing Maps
  • Review
  • 3D Growing Neural Gas network
  • Experiments: input space adaptation & normal estimation
  • Extensions of the GNG algorithm
• Improving keypoint detection from noisy 3D observations
• GPGPU Parallel Implementations
• Applications
• Conclusions

11/79
3D Representation GSOM

Review

Review

• SOMs were originally proposed for data clustering and pattern recognition purposes [Kohonen, 1982, Vesanto and Alhoniemi, 2000, Dittenbach et al., 2001]
• As the original model had some drawbacks due to the pre-established topology of the network, growing approaches were proposed to deal with this problem
• The Growing Neural Gas network has been successfully applied to the representation of 2D shapes in many computer vision problems [Stergiopoulou and Papamarkos, 2006, García-Rodríguez et al., 2010, Baena et al., 2013]
• Approaches that use traditional SOMs for 3D data representation already exist [Yu, 1999, Junior et al., 2004]
  • Difficulties in correctly approximating concave structures
  • High computational cost
  • Synthetic data

12/79
3D Representation GSOM

Review

Review (II)

• Moreover, there are some limitations and unexplored topics in the application of SOM-based methods to 3D representation:
  • Most of these works do not consider the high computational cost of the learning step
  • They do not guarantee a response within strict time constraints
  • They assume perfect, noise-free point clouds
  • Data fusion (geometric information + colour information) has not been considered
  • They do not deal with point cloud sequences, only single-shot data

13/79
3D Representation GSOM

GNG network

3D Growing Neural Gas Network

• Obtaining a reduced and compact representation of 3D data
  • Self-Organizing Maps – Growing Neural Gas
• Growing Neural Gas (GNG) algorithm [Fritzke, 1995]
  • Incremental training algorithm
  • Links between the units in the network are established through Competitive Hebbian Learning (CHL)
  • Topology-preserving graph
  • Flexibility, growth, rapid adaptation and good quality of representation
• The GNG representation is comprised of nodes (neurons) and connections (edges)
  • Wire-frame model

Initial, intermediate and final states of the GNG learning algorithm

14/79
3D Representation GSOM

GNG algorithm

GNG algorithm

• Input data is defined in ℝ^d
  • For 3D representation, d = 3
• Adaptation: reconfiguration module
  • Random patterns are presented to the network
• Growth: the network starts with two neurons and new neurons are inserted
• Flexibility: neurons and connections may be removed during the learning stage
• This process is repeated until an ending condition is fulfilled:
  • Number of neurons/patterns
  • Adaptation error threshold
• Highly parallelizable (a sketch of one adaptation step is given below)

15/79
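The following is a minimal sketch of a single GNG adaptation step as just outlined (winner search, error accumulation, weight adaptation, CHL edge update, edge ageing). It is illustrative only; the function name, data structures and parameter values are assumptions, not the thesis code.

```python
# Illustrative sketch of one GNG adaptation step (not the thesis implementation).
import numpy as np

def gng_step(x, W, error, edges, age, eps_w=0.1, eps_n=0.001, a_max=250):
    """Adapt the network to a single 3D input pattern x.
    W: (M, 3) neuron weights; error: (M,) accumulated errors;
    edges: set of (i, j) tuples; age: dict mapping edges to their age."""
    # 1. Find the two nearest neurons (winner s1 and runner-up s2).
    d = np.linalg.norm(W - x, axis=1)
    s1, s2 = np.argsort(d)[:2]

    # 2. Accumulate the squared error of the winner.
    error[s1] += d[s1] ** 2

    # 3. Move the winner and its topological neighbours towards x,
    #    and age the edges emanating from the winner.
    W[s1] += eps_w * (x - W[s1])
    neighbours = [j for (i, j) in edges if i == s1] + [i for (i, j) in edges if j == s1]
    for n in neighbours:
        W[n] += eps_n * (x - W[n])
        age[(min(s1, n), max(s1, n))] += 1

    # 4. Competitive Hebbian Learning: connect s1 and s2 with a fresh edge.
    e = (min(s1, s2), max(s1, s2))
    edges.add(e)
    age[e] = 0

    # 5. Remove edges older than a_max (the full algorithm also removes
    #    neurons left without any edge).
    for old in [e for e in edges if age[e] > a_max]:
        edges.discard(old)
        del age[old]
    return s1
```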
3D Representation GSOM

Experiments: Data

Experiments: Data acquisition

• The algorithm is independent of the data source
• We managed 3D data coming from different sensors
  • Laser unit: an LMS-200 Sick mounted on a sweeping unit
    o Outdoor environments. Its range is 80 metres with an error of 1 millimetre per metre
  • Time-of-Flight camera
    o SR4000 camera. It has a range of 5-10 metres
    o The accuracy varies depending on the characteristics of the observed scene, such as object reflectivity and ambient lighting conditions
    o Generation of point clouds during real-time acquisition
  • Range camera: structured light, Microsoft Kinect device
    o RGB-D information. Indoor environments. Its range is from 0.8 to 6 metres
    o Generation of point clouds during real-time acquisition

16/79
3D Representation GSOM

Experiments: Data

Experiments: Data acquisition

3D sensors used for experiments. From left to right: Sick laser unit
LMS-200, Time-Of-Flight SR4000 camera and Microsoft Kinect

Mobile robots used for experiments.
Left: Magellan Pro unit used
for indoors.
Right: PowerBot used for outdoors.

17/79
3D Representation GSOM

Experiments: Data

Experiments: Data Sets

• Some public data sets have been used to validate the proposed method:
  • The well-known Stanford 3D Scanning Repository. It contains complete models that have been previously processed (noise-free)
  • A dataset captured using the Kinect sensor, released by the Computer Vision Laboratory of the University of Bologna [Tombari et al., 2010a]
  • Our own dataset, obtained using the three previously mentioned 3D sensors

18/79
3D Representation GSOM

Experiments: Data

Experiments: Data Sets

• Blensor software: it allowed us to generate synthetic scenes and to obtain partial views of the generated scene as if a Kinect device were used
  • It provided us with ground-truth information for experiments

Simulated scene

Simulated scene + Gaussian noise
19/79
3D Representation GSOM

Experiments

Experiments

• The GNG method has been applied to 3D data representation
  • Input space adaptation
  • Noise removal properties
• Extensions of the GNG-based algorithm
  • Colour-GNG
  • Sequence management
  • 3D surface reconstruction

20/79
3D Representation GSOM

Exp: GNG 3D representation

Experiments: GNG 3D representation

Applying GNG to laser data

Applying GNG to Kinect data

Applying GNG to SR4000 data

21/79
3D Representation GSOM

Exp: GNG 3D representation

Experiments: GNG 3D representation

Applying GNG to Kinect data
22/79
3D Representation GSOM

Exp: Input space adaptation

Experiments: Input space adaptation

• The GNG method obtains better adaptation to the input space than other filtering methods such as the Voxel Grid technique
  • It obtains a lower adaptation error (Mean Squared Error, MSE); see the sketch of the measure below
  • Tested on CAD models and simulated scenes

Lower error

Input space adaptation MSE for different models (metres). Voxel Grid versus GNG. Numbers in bold indicate the best results.
23/79
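A minimal sketch of the adaptation measure referred to above: the mean squared distance from every input point to its nearest representative (a GNG node or a voxel centroid). Function and variable names are illustrative assumptions.

```python
# Sketch of the input-space adaptation (MSE) measure used to compare
# GNG nodes against Voxel Grid centroids.
import numpy as np
from scipy.spatial import cKDTree

def adaptation_mse(points, representatives):
    """points: (N, 3) input cloud; representatives: (M, 3) GNG nodes or voxel centroids."""
    tree = cKDTree(representatives)
    d, _ = tree.query(points, k=1)   # distance from each point to its closest representative
    return float(np.mean(d ** 2))    # mean squared error, in squared metres
```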
3D Representation GSOM

Exp: Input space Adaptation

Experiments: Input space adaptation (II)

Noisy model σ = 0.4

GNG representation

Original CAD model

Voxel grid representation

Filtering quality using 10,000 nodes. GNG vs Voxel Grid comparison
24/79
3D Representation GSOM

Exp: Normal estimation

Experiments: Normal estimation

• Surface normals are important properties of a geometric surface and are heavily used in many areas such as computer vision and computer graphics
• Normal and curvature estimation can be affected by the presence of noise (a sketch of the standard estimation scheme is given below)

Normal estimation over noisy input data

• The representation obtained using the GNG method allows more accurate normal information to be computed
25/79
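For reference, a minimal PCA-based normal estimation sketch (fit a local plane to the k nearest neighbours of each point and take the direction of least variance). This is the standard scheme the comparison relies on, not the thesis code; k and the neighbourhood search are assumptions.

```python
# Minimal PCA-based normal estimation over a point cloud.
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=16):
    """points: (N, 3) array; returns (N, 3) unit normals (sign not disambiguated)."""
    tree = cKDTree(points)
    normals = np.empty_like(points)
    for i, p in enumerate(points):
        _, idx = tree.query(p, k=k)
        nbrs = points[idx] - points[idx].mean(axis=0)
        # The normal is the direction of least variance of the neighbourhood.
        _, _, vt = np.linalg.svd(nbrs, full_matrices=False)
        normals[i] = vt[-1]
    return normals
```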
3D Representation GSOM

Exp: Normal estimation

Experiments: Normal estimation (II)

Top: normal estimation on a filtered point cloud produced by the GNG method. Bottom: normal estimation on a raw point cloud.

• The normals are considered more stable, as their distribution is smoother and they exhibit fewer abrupt changes in direction
26/79
3D Representation GSOM

Extensions: Colour-GNG

Extension: Colour-GNG

• Modern 3D sensors provide colour information (e.g. Kinect, Carmine, Asus Xtion, …)
• GNG is extended to consider colour information during the learning step (see the sketch below)
  • Input data is defined in ℝ^d, where d = 6
  • Colour information is considered during the weight adaptation step, but it is not included in the CHL (winning neuron) process
    o We still focus on topology preservation
  • The winning-neuron step computes the Euclidean distance using only the x, y, z components
  • No post-processing steps are required, as the neurons' colour is obtained during the learning process
27/79
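A minimal sketch of the Colour-GNG idea described above: winners are chosen on geometry only, while the full 6D weight (position plus colour) is adapted. Function and parameter names are assumptions for illustration.

```python
# Sketch of the Colour-GNG adaptation step: 3D winner search, 6D weight update.
import numpy as np

def colour_gng_adapt(x6, W6, neighbours_of, eps_w=0.1, eps_n=0.001):
    """x6: input pattern (x, y, z, r, g, b); W6: (M, 6) neuron weights;
    neighbours_of(i): indices of the topological neighbours of neuron i."""
    # Winner search uses only the spatial part, preserving topology.
    d = np.linalg.norm(W6[:, :3] - x6[:3], axis=1)
    s1 = int(np.argmin(d))
    # Adaptation moves the full 6D weight, so colour is learnt during training.
    W6[s1] += eps_w * (x6 - W6[s1])
    for n in neighbours_of(s1):
        W6[n] += eps_n * (x6 - W6[n])
    return s1
```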
3D Representation GSOM

Extensions: Colour-GNG

Extension: Colour-GNG (II)

(a),(b),(c) show original point clouds. (d),(e),(f) show downsampled point
clouds using the proposed method
28/79
3D Representation GSOM

Extensions: Colour-GNG

Extension: Colour-GNG (III)

• The Mario figure is down-sampled using the Colour-GNG method
• Results are similar to those obtained with the colour-interpolation post-processing step
29/79
3D Representation GSOM

Extensions: Sequences

Extension: Sequence management

• Extension of the GNG for processing sequences of point clouds
• It is not required to restart the learning process
• It provides a speed-up in runtime, as neurons are kept between point clouds
• This extension was applied in a mobile robotics application

An improved workflow to manage point cloud sequences using the GNG algorithm
30/79
3D Representation GSOM

Extension: 3D Reconstruction

Extension: 3D Surface Reconstruction

• Three-dimensional surface reconstruction is not considered in the original GNG algorithm, as it only generates wire-frame models
• [Holdstein and Fischer, 2008, Do Rego et al., 2010, Barhak, 2002] have already considered the creation of 3D triangular faces by modifying the original GNG algorithm
  • Post-processing steps are required to avoid gaps and holes in the final mesh
• We extended the CHL, developing a method able to produce full 3D meshes during the learning stage
  • No post-processing steps are required
  • A new learning scheme was developed
31/79
3D Representation GSOM

Extension: 3D Reconstruction

Extension: 3D Surface Reconstruction (II)

• Avoid non-manifold and overlapping edges
  • More than two neighbours
  • It is checked whether the face to be created already exists
• A face is created whenever the already existing edges, or the new ones, form a triangle (see the sketch after the next slide)
• The neuron insertion process was also modified

Considered situations for edge and face creation during the extended CHL
32/79
3D Representation GSOM

Extension: 3D Reconstruction

Extension: 3D Surface Reconstruction (III)

Left: the triangle formed by these 3 neurons is close to a right triangle.
Right: the edge connecting s1 and ni is removed, as the angle formed exceeds 90 degrees.

Edge removal constraint based on the Thales sphere

Left: neuron insertion between the neuron q with highest error and its neighbour f with highest error.
Right: four new triangles and two edges are created considering r, q and f.

Face creation during the insertion of new neurons
33/79
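An illustrative sketch of the face-creation check mentioned above: whenever a new edge is added during the extended CHL, a triangular face is created for every neighbour that closes a triangle, provided that the face does not already exist. This is a simplification of the thesis method, with assumed data structures (sets of edges and faces); the Thales-sphere edge removal and the modified insertion step are omitted.

```python
# Face creation when a new edge (a, b) is added during the extended CHL.
def maybe_create_faces(a, b, edges, faces):
    """edges: set of frozenset({i, j}); faces: set of frozenset({i, j, k})."""
    edges.add(frozenset((a, b)))
    candidates = {n for e in edges for n in e} - {a, b}
    for c in candidates:
        closes_triangle = frozenset((a, c)) in edges and frozenset((b, c)) in edges
        face = frozenset((a, b, c))
        if closes_triangle and face not in faces:
            faces.add(face)   # skip faces that already exist (avoid overlaps)
    return faces
```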
3D Representation GSOM

Extension: 3D Reconstruction

Extension: 3D Surface Reconstruction (IV)

Different views of reconstructed models using an existing GNG-based method [Do Rego et al., 2010] for surface reconstruction

• When the post-processing steps are omitted, gaps and holes remain in the final 3D reconstructed models

34/79
3D Representation GSOM

Extension: 3D Reconstruction

Extension: 3D Surface Reconstruction (V)

Reconstructed models using our extended GNG method for face
reconstruction and without applying post-processing steps
35/79
3D Representation GSOM

Extension: 3D Reconstruction

Extension: 3D Surface Reconstruction (VI)
Top: 3D model of a person (Kinect sensor). Bottom: digitized foot (foot digitizer).

Left: noisy point clouds captured using the Kinect sensor.
Right: 3D reconstruction using the proposed method.

36/79
Index

• Introduction
• 3D Representation using Growing Self-Organizing Maps
• Improving keypoint detection from noisy 3D observations
  • Review
  • Improving keypoint detection
  • Correspondence matching
  • Results
• GPGPU Parallel Implementations
• Applications
• Conclusions

37/79
Improving Keypoint detection

Review

Review

• Filtering and down-sampling have become essential steps in 3D data processing
• Motivation: dealing with noisy data obtained from 3D sensors such as the Microsoft Kinect or lasers
• Result: improved 3D keypoint detection and, therefore, improved registration
• We propose the use of the GNG algorithm for down-sampling and filtering 3D data
• Its beneficial attributes will be demonstrated through the 3D registration problem

General System Overview
38/79
Improving Keypoint detection

Review

Review (II)
Registration: Aligning various 3D point cloud data views into a complete model

Pairwise matching
39/79
Improving Keypoint detection

Keypoint detection

3D Keypoint detection

• Applying keypoint detection algorithms to filtered point clouds
• State-of-the-art 3D keypoint detectors
  • Different techniques are used to test and measure the improvement achieved by using the GNG method to filter and down-sample the input data

40/79
Improving Keypoint detection

Keypoint detection

3D Keypoint detection (II)

• 3D keypoint detectors (a sketch of the covariance-based responses is given below)
  • SIFT3D: uses depth as the intensity value in the original SIFT algorithm
  • Harris3D: uses the surface normals of 3D points
  • Tomasi3D: performs an eigenvalue decomposition of the covariance matrix
  • Noble3D: evaluates the ratio between the determinant and the trace of the covariance matrix

41/79
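A minimal sketch of covariance-based keypoint responses in the spirit of the Tomasi3D and Noble3D detectors named above: both operate on the eigenvalues of the local covariance matrix. The neighbourhood size k and the final non-maximum-suppression step are assumptions, not the detectors' exact implementations.

```python
# Covariance-based keypoint responses over a point cloud (illustrative only).
import numpy as np
from scipy.spatial import cKDTree

def keypoint_responses(points, k=30):
    """points: (N, 3). Returns per-point Tomasi-style and Noble-style responses;
    keypoints would then be local maxima of either response."""
    tree = cKDTree(points)
    tomasi = np.zeros(len(points))
    noble = np.zeros(len(points))
    for i, p in enumerate(points):
        _, idx = tree.query(p, k=k)
        nbrs = points[idx] - points[idx].mean(axis=0)
        cov = nbrs.T @ nbrs / k
        eigvals = np.linalg.eigvalsh(cov)                 # ascending order
        tomasi[i] = eigvals[0]                            # smallest eigenvalue
        noble[i] = np.linalg.det(cov) / max(np.trace(cov), 1e-12)  # det / trace
    return tomasi, noble
```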
Improving Keypoint detection

Feature descriptors

3D Feature descriptors

• Feature descriptors are calculated over the detected keypoints to perform feature matching
  • PFH and FPFH: based on a histogram of the differences of angles between the normals of the neighbouring points
  • SHOT and CSHOT: a spherical grid centred on the point divides the neighbourhood so that, in each grid bin, a weighted histogram of normals is obtained

FPFH

CSHOT
42/79
Improving Keypoint detection

Feature matching

Feature matching

• Correspondences between keypoints are validated through the RANSAC algorithm, rejecting inconsistent correspondences (see the sketch below)

43/79
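A sketch of RANSAC-style correspondence rejection for rigid registration: a rigid transform is repeatedly fitted to three random correspondences and the largest consensus set is kept. The thresholds, iteration count and function names are illustrative assumptions, not the exact configuration used in the thesis.

```python
# RANSAC correspondence rejection for rigid 3D registration (illustrative).
import numpy as np

def rigid_from_pairs(src, dst):
    """Least-squares rigid transform (Kabsch) mapping matched points src -> dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:       # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def ransac_correspondences(src, dst, iters=500, thresh=0.05):
    """src, dst: (N, 3) matched keypoint coordinates. Returns an inlier mask."""
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        sample = np.random.choice(len(src), 3, replace=False)
        R, t = rigid_from_pairs(src[sample], dst[sample])
        residuals = np.linalg.norm((src @ R.T + t) - dst, axis=1)
        inliers = residuals < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers            # only geometrically consistent correspondences
```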
Improving Keypoint detection

Results

Results: Feature matching

• Correspondence matching computed on different input data
  • Top: raw point clouds
  • Middle: reduced representation using the GNG (20,000 neurons)
  • Bottom: reduced representation using the GNG (10,000 neurons)
• RANSAC is used to reject wrong matches

Raw 3D data

GNG 20,000 nodes

GNG 10,000 nodes

44/79
Improving Keypoint detection

Results

Results: Transformation errors

Lowest max errors / Lowest transformation error (table annotations)

Mean, median, minimum and maximum RMS*2 errors of the estimated transformations using different keypoint detectors (metres). A sketch of this error measure follows below.
*1 Uniform Sampling
*2 Root Mean Square transformation error

45/79
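One common way to compute such an RMS transformation error, given here only as an assumed illustration (not necessarily the exact evaluation code of the thesis): apply the estimated and the ground-truth transforms to the same cloud and take the root-mean-square distance between the two results.

```python
# RMS transformation error between an estimated and a ground-truth rigid transform.
import numpy as np

def rms_transform_error(points, R_est, t_est, R_gt, t_gt):
    """points: (N, 3); R_*: (3, 3) rotations; t_*: (3,) translations. Returns metres."""
    p_est = points @ R_est.T + t_est
    p_gt = points @ R_gt.T + t_gt
    return float(np.sqrt(np.mean(np.sum((p_est - p_gt) ** 2, axis=1))))
```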
Index

• Introduction
• 3D Representation using Growing Self-Organizing Maps
• Improving keypoint detection from noisy 3D observations
• GPGPU Parallel Implementations
  • Graphics Processing Unit
  • GPU-based implementation of the GNG algorithm
  • GPU-based tensor extraction algorithm
  • Conclusions
• Applications
• Conclusions

46/79
GPGPU Implementations

GPUs

Graphics Processing Unit

• GPUs have democratized High Performance Computing (HPC)
  • Massively parallel processors on a commodity PC
  • Great FLOP/€ ratio compared with other solutions
• However, this does not come for free
  • New programming model
  • Algorithms need to be re-thought and re-implemented
• The Growing Neural Gas algorithm is computationally expensive
  • Most computer vision applications are time-constrained
  • A GPGPU implementation is proposed
47/79
GPGPU Implementations

GPUs

Graphics Processing Unit (II)

• More transistors devoted to data processing
• GPUs are comprised of streaming multiprocessors
• High GPU memory bandwidth
• GPGPU: General-Purpose computing on Graphics Processing Units
• Key hardware feature: the cores are SIMT
  • Single Instruction, Multiple Threads

G80 CUDA NVIDIA architecture

48/79
GPGPU Implementations

GPU Implementation GNG

GPU Implementation GNG

• Stages of the GNG algorithm that are highly parallelizable:
  • Calculating the distance to every neuron for each pattern
  • Searching the winning neurons
  • Deleting neurons and edges
  • Searching the neuron with maximum error
• Other improvements
  • Avoid memory transfers between CPU and GPU
  • Exploit the memory hierarchy

Highly parallelizable stages

49/79
GPGPU Implementations

GPU Implementation GNG

Parallel Min/Max Reduction

• A parallel Min/Max reduction that computes the Min/Max of large arrays of values (neurons)
• Strategy used to find the Min/Max winning neurons
• Reduces the linear cost O(n) of the sequential search to a logarithmic number of parallel steps, O(log n)
• Provides better performance for a large number of neurons

Example of the Parallel Reduction Algorithm

• Proposed version: 2MinParallelReduction (see the sketch below)
  • Extended version that obtains the 2 minimum values in the same number of steps
50/79
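An algorithmic sketch of the 2MinParallelReduction idea described above: at each reduction level, pairs of candidates are merged while keeping the two smallest values, so the two winning neurons are found in a logarithmic number of steps. This is a serial illustration of the scheme, not the CUDA kernel itself.

```python
# Serial illustration of a 2-minimum reduction tree (the GPU runs each level in parallel).
import numpy as np

def two_min_reduction(dist):
    """dist: 1D array of pattern-to-neuron distances.
    Returns the indices of the winner s1 and runner-up s2."""
    # Each candidate keeps its two best (distance, index) pairs.
    cands = [sorted([(d, i), (np.inf, -1)]) for i, d in enumerate(dist)]
    while len(cands) > 1:                        # one level of the reduction tree
        merged = []
        for a, b in zip(cands[0::2], cands[1::2]):
            merged.append(sorted(a + b)[:2])     # keep the two smallest of the four
        if len(cands) % 2:                       # carry an odd leftover element upwards
            merged.append(cands[-1])
        cands = merged
    (_, s1), (_, s2) = cands[0]
    return s1, s2
```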
GPGPU Implementations

Experimental Setup

Experimental setup

• Main GNG parameters:
  • ~0-20,000 neurons and a maximum λ (input patterns per iteration) of 1,000-2,000
  • Other parameters have been fixed based on previous works [García-Rodriguez et al., 2012]
    o εw = 0.1, εn = 0.001
    o amax = 250, α = 0.5, β = 0.0005
• Hardware
  • GPUs: CUDA-capable devices used in the experiments
  • CPU: single-thread and multi-thread implementations were tested
    o Intel Core i3 540 @ 3.07 GHz
51/79
GPGPU Implementations

Exp: 2MinParallelReduction

Experiments: 2MinParallelReduction

Runtime and speed-up using CPU and GPU implementations
52/79
GPGPU Implementations

Exp: GNG Runtime

Experiments: GNG learning runtime

GPU and CPU GNG runtime, and speed-up for different devices

53/79
GPGPU Implementations

Exp: Hybrid version

Experiments: Hybrid version

• The CPU implementation was faster for small network sizes
  • We developed a hybrid implementation
  • The GPU version automatically takes over once its computing time is detected to be lower than that of the CPU

Example of CPU and Hybrid GNG runtime for different devices
54/79
GPGPU Implementations

GPU Feature extraction

GPU-based Tensor extraction algorithm

• Time-constrained 3D feature extraction
  • Most feature descriptors cannot be computed online due to their high computational complexity
    o 3D Tensor [Mian et al., 2006b]
    o Geometric Histogram [Hetzel et al., 2001]
    o Spin Images [Andrew Johnson, 1997]
  • Highly parallelizable
  • Geometrical properties
  • Invariant to linear transformations
• An accelerated GPU-based implementation of an existing 3D feature extraction algorithm is proposed
  • It accelerates the entire pipeline of RGB-D based computer vision systems

55/79
GPGPU Implementations

GPU Feature extraction

GPU-based Tensor extraction algorithm (II)

• The surface area of the mesh intersecting each bin of the grid is the value of the tensor element (a simplified sketch is given below)
• As many threads as voxels are launched in parallel, where each GPU thread represents a voxel (bin) of the grid
• Each thread computes the area of intersection between the mesh and its corresponding voxel using the Sutherland-Hodgman polygon clipping algorithm [Foley et al., 1990]
56/79
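A simplified sketch of the tensor idea: accumulate mesh surface area into a 3D grid of bins. For brevity, each triangle's whole area is assigned to the bin of its centroid; the method described above instead clips each triangle against the voxel boundaries (Sutherland-Hodgman) to split the area exactly, and runs one GPU thread per voxel. All names and the grid parameters are assumptions.

```python
# Approximate surface-area tensor over a voxel grid (centroid binning, not exact clipping).
import numpy as np

def surface_area_tensor(vertices, faces, origin, bin_size, dims=(10, 10, 10)):
    """vertices: (V, 3); faces: (F, 3) vertex indices; origin: grid corner;
    bin_size: voxel edge length. Returns a dims-shaped tensor of areas."""
    tensor = np.zeros(dims)
    for f in faces:
        a, b, c = vertices[f]
        area = 0.5 * np.linalg.norm(np.cross(b - a, c - a))   # triangle area
        centroid = (a + b + c) / 3.0
        idx = np.floor((centroid - origin) / bin_size).astype(int)
        if np.all(idx >= 0) and np.all(idx < dims):
            tensor[tuple(idx)] += area                        # accumulate into its bin
    return tensor
```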
GPGPU Implementations

Exp: performance

Experiments: Performance

Runtime comparison and speed-up obtained for proposed methods
using different graphics boards

57/79
Index

• Introduction
• 3D Representation using Growing Self-Organizing Maps
• Improving keypoint detection from noisy 3D observations
• GPGPU Parallel Implementations
• Applications
  • Robotics
  • Computer Vision
  • CAD/CAM
• Conclusions

58/79
Applications

Applications

• Different case studies where the GNG-based method proposed in this PhD thesis was applied to different areas
  • Robot vision
    o 6DoF pose registration
  • Computer vision
    o 3D object recognition under cluttered conditions
  • CAD/CAM
    o Rapid prototyping in shoe last manufacturing


59/79
Applications

Robotics

Robotics: 6DoF pose registration

• The main goal of this application is to perform six-degrees-of-freedom (6DoF) pose registration in semi-structured environments
  • Man-made indoor and outdoor environments
• We combined our accelerated GNG-based algorithm with the method proposed in [Viejo and Cazorla, 2013]
  • Planar patch extraction
• It provides a good starting point for Simultaneous Localization and Mapping (SLAM)
• GNG was applied directly to raw 3D data
60/79
Applications

Robotics

Robotics: 6DoF pose registration (II)

Without GNG | GNG

Left: planar patches extracted from the SR4000 camera.
Right: filtered data using the GNG network: more planar patches are extracted.
61/79
Applications

Robotics

Robotics: 6DoF pose registration (III)

Robot trajectory

Without GNG | GNG

Planar-based 6DoF pose registration results.
The left image shows map-building results without using GNG, while the results shown on the right are obtained after computing a GNG mesh.
62/79
Applications

3D Object recognition

3D Object Recognition

• The main goal of this application is the recognition of objects under time constraints and cluttered conditions
• The GPU-based implementation of the semi-local surface feature (tensor) is successfully used to recognize objects in cluttered scenes
• A library of models is constructed offline, storing all extracted 3D tensors in an efficient way using a hash table
  • Multiple views
63/79
Applications

3D Object recognition

3D Object Recognition (II)

• Object recognition is performed on scenes with different levels of occlusion
• Objects are occluded by other objects, both stored and not stored in the library
• The average recognition rate was 84%, with 16% wrong matches and 0% false negatives
64/79
Applications

3D Object recognition

3D Object Recognition (III)

• The GPU-based 3D feature implementation is successfully used in a 3D object recognition application
• Parallel matching is performed on the GPU: correlation function
• The implemented prototype took around 800 ms with a GPU implementation to perform 3D object recognition over the entire scene

Scene 2

65/79
Applications

CAD: Rapid Prototyping

Rapid Prototyping in Shoe Last Manufacturing

• With the advent of CAD/CAM and rapid acquisition devices it is possible to digitize old raised shoe lasts in order to reuse them in shoe last design software

Process to reconstruct existing shoe lasts and to compute the topology preservation error with respect to the original CAD design
66/79
Applications

CAD: Rapid Prototyping

Rapid Prototyping in Shoe Last Manufacturing (II)

• The main goal of this research is to obtain a grid of points adapted to the topology of the footwear shoe last, from a sequence of sections with disorganized points acquired by sweeping an optical laser digitizer

Typical sequence of sections of a shoe last. Noisy data obtained from the digitizer
67/79
Applications

CAD: Rapid Prototyping

Rapid Prototyping in Shoe Last Manufacturing (III)

Voxel Grid versus GNG: Mean error
along different sections of the shoe last
68/79
Applications

CAD: Rapid Prototyping

Rapid Prototyping in Shoe Last Manufacturing (IV)

Input space
GNG nodes
VG nodes

GNG vs VG topological preservation comparison

3D Reconstruction GNG

3D Reconstruction VG

69/79
Index

• Introduction
• 3D Representation using Growing Self-Organizing Maps
• Improving keypoint detection from noisy 3D observations
• GPGPU Parallel Implementations
• Applications
• Conclusions
  • Contributions
  • Future work
  • Publications

70/79
Conclusions

Contributions

Contributions

• Contributions made in the topic of research:
  • Proposal of a new method to create compact, reduced and efficient 3D representations from noisy data
    ‐ Development of a GNG-based method capable of dealing with different sensors
    ‐ Extension of the GNG algorithm to consider colour information
    ‐ Extension of the GNG algorithm for 3D surface reconstruction
    ‐ Sequence management
    ‐ Integration of the proposed method in 3D keypoint detection algorithms, improving their performance
    ‐ GPU-based implementation to accelerate the learning process of the GNG and NG algorithms
    ‐ A hybrid implementation of the GNG algorithm that takes advantage of both the CPU and the GPU

71/79
Conclusions

Contributions

Contributions (II)

• Integration of 3D data processing algorithms in complex computer vision systems:
  ‐ Normal estimation has been ported to the GPU, considerably decreasing its runtime
  ‐ Point cloud triangulation has been ported to the GPU, accelerating its runtime
  ‐ A GPU time-constrained implementation of a 3D feature extraction algorithm
• Application of the proposed method in various real computer vision applications:
  ‐ Robotics: localization and mapping: 6DoF pose registration
  ‐ Computer vision: 3D object recognition under cluttered conditions
  ‐ CAD/CAM: rapid prototyping in shoe last manufacturing

72/79
Conclusions

Future work

Future work

• Other improvements on the GPU implementation of the GNG algorithm:
  • Using multi-GPU to manage several neural networks simultaneously
  • Distributed computing
  • Testing new architectures: Intel Xeon Phi [Fang et al., 2013a]
  • Generating random patterns on the GPU
• More applications of the accelerated GNG algorithm will be studied in the future
  • Clustering multi-dimensional data: Big Data
  • Medical image reconstruction
• Extension of the real-time implementation of the 3D tensor
  • Visual features extracted from RGB information
  • Improving the implicit keypoint detector used by the 3D tensor
73/79
Conclusions

Publications

Publications

•

4 JCR Journal papers
o

“Real-time 3D semi-local surface patch extraction using GPGPU”
S. Orts-Escolano, V. Morell, J. Garcia-Rodriguez, M. Cazorla, R.B. Fisher; Journal of
Real-Time Image Processing. December 2013; ISSN: 1861-8219; Impact Factor: 1.156
(JCR 2012)

o

“GPGPU implementation of growing neural gas: Application to 3D
scene reconstruction”
S. Orts, J. García Rodríguez, D. Viejo, M. Cazorla, V. Morell; J. Parallel Distrib. Comput.
72(10); pp: 1361-1372 (2012); ISSN: 0743-7315; Impact Factor: 1.135 (JCR 2011)

o

“3D-based reconstruction using growing neural gas landmark:
application to rapid prototyping in shoe last manufacturing”
A. Jimeno-Morenilla, J. García-Rodriguez, S. Orts-Escolano, M. Davia-Aracil; The
International Journal of Advanced Manufacturing Technology: May 2013. Vol 69. pp:
657-668; ISSN: 0268-3768; Impact Factor: 1.205 (JCR 2012)

o

“Autonomous Growing Neural Gas for applications with time
constraint: Optimal parameter estimation”
J. García Rodríguez, A. Angelopoulou, J. M. García Chamizo, A. Psarrou, S. Orts-Escolano, V. Morell-Giménez; Neural Networks 32: pp: 196-208 (2012), ISSN: 0893-6080; Impact Factor: 1.927 (JCR 2012)
74/79
Conclusions

Publications

Publications (II)

•

International conferences
o

“Point Light Source Estimation based on Scenes Recorded by a RGB-D
camera”
B. Boom, S. Orts-Escolano, X. Ning, S. McDonagh, P. Sandilands, R.B. Fisher; British
Machine Vision Conference, BMVC 2013, Bristol, UK. Rank B

o

“Point Cloud Data Filtering and Downsampling using Growing Neural
Gas”
S. Orts-Escolano, V. Morell, J. Garcia-Rodriguez and M. Cazorla; International Joint
Conference on Neural Networks, IJCNN 2013, Dallas, Texas. Rank A

o

“Natural User Interfaces in Volume Visualisation Using Microsoft
Kinect”
A. Angelopoulou, J. García Rodríguez, A. Psarrou, M. Mentzelopoulos, B. Reddy, S. Orts-Escolano, J. A. Serra. International Conference on Image Analysis and Processing,
ICIAP 2013, Naples, Italy: 11-19. Rank B

o

“Improving Drug Discovery using a neural networks based parallel
scoring functions”
H. Perez-Sanchez, G. D. Guerrero, J. M. Garcia, J. Pena, J. M. Cecilia, G. Cano, S. Orts-Escolano and J. Garcia-Rodriguez. International Joint Conference on Neural Networks,
IJCNN 2013, Dallas, Texas. Rank A
75/79
Conclusions

Publications

Publications (III)

•

International conferences
o

“Improving 3D Keypoint Detection from Noisy Data Using Growing
Neural Gas”
J. García Rodríguez, M. Cazorla, S. Orts-Escolano, V. Morell. International Work-Conference
on Artificial Neural Networks, IWANN 2013, Puerto de la Cruz, Tenerife, Spain: 480-487.
Rank B

o

“3D Hand Pose Estimation with Neural Networks”
J. A. Serra, J. García Rodríguez, S. Orts-Escolano, J. M. García Chamizo, A. Angelopoulou, A.
Psarrou, M. Mentzelopoulos, J. Montoyo-Bojo, E. Domínguez. International Work-Conference on Artificial Neural Networks, IWANN 2013, Puerto de la Cruz, Tenerife, Spain:
504-512. Rank B

o

“3D Gesture Recognition with Growing Neural Gas”
J. A. Serra-Perez, J. Garcia-Rodriguez, S. Orts-Escolano, J. M. Garcia-Chamizo, A.
Angelopoulou, A. Psarrou, M. Mentzelopoulos, J. Montoyo Bojo. International Joint
Conference on Neural Networks. IJCNN 2013, Dallas, Texas. Rank A

o

“Multi-GPU based camera network system keeps privacy using
Growing Neural Gas”
S. Orts-Escolano, J. García Rodríguez, V. Morell, J. Azorín López, J. M. García Chamizo.
International Joint Conference on Neural Networks (IJCNN) 2012, Brisbane, Australia, June:
1-8. Rank A
76/79
Conclusions

Publications

Publications (IV)

•

International conferences
o

“A study of registration techniques for 6DoF SLAM”
V. Morell, M. Cazorla, D. Viejo, S. Orts-Escolano, J. García Rodríguez. International
Conference of the Catalan Association for Artificial Intelligence, CCIA 2012, University of
Alacant, Spain: 111-120. Rank B

o

“Fast Autonomous Growing Neural Gas”
J. García Rodríguez, A. Angelopoulou, J. M. García Chamizo, A. Psarrou, S. Orts, V. Morell.
International Joint Conference on Neural Networks, IJCNN 2011, San Jose, California:
725-732. Rank A

o

“Fast Image Representation with GPU-Based Growing Neural Gas”
J. García Rodríguez, A. Angelopoulou, V. Morell, S. Orts, A. Psarrou, J. M. García Chamizo.
International Work-Conference on Artificial Neural Networks, IWANN 2011,
Torremolinos-Málaga, Spain: 58-65. Rank B

o

“Video and Image Processing with Self-Organizing Neural Networks”
J. García Rodríguez, E. Domínguez, A. Angelopoulou, A. Psarrou, F. J. Mora-Gimeno, S.
Orts, J. M. García Chamizo. International Work-Conference on Artificial Neural Networks,
IWANN 2011, Torremolinos-Málaga, Spain: 98-104. Rank B
77/79
Conclusions

Publications

Publications (V)

•

National conferences
o

“Procesamiento de múltiples flujos de datos con Growing Neural Gas
sobre Multi-GPU”

S. Orts-Escolano, J. García-Rodríguez, V. Morell-Giménez. Jornadas de Paralelismo JP,
Elche, España, 2012

•

Book chapters
o

“A Review of Registration Methods on Mobile Robots”

V. Morell-Gimenez, S. Orts-Escolano, J. García Rodríguez, M. Cazorla, D. Viejo. Robotic
Vision: Technologies for Machine Learning and Vision Applications. IGI GLOBAL

o

“Computer Vision Applications of Self-Organizing Neural Networks”

J. García-Rodríguez, J. M. García-Chamizo, S. Orts-Escolano, V. Morell-Gimenez,
J.Serra-Perez, A. Angelolopoulou, M. Cazorla, D. Viejo. Robotic Vision: Technologies
for Machine Learning and Vision Applications. IGI GLOBAL

•

Poster presentations
o

“6DoF pose estimation using Growing Neural Gas Network”

S. Orts, J. Garcia-Rodriguez, D. Viejo, M. Cazorla, V. Morell, J. Serra. 5th International
Conference on Cognitive Systems, Cogsys 2012, TU Vienna, Austria

o

“GPU Accelerated Growing Neural Gas Network”

S. Orts, J. Garcia, V. Morell. Programming and Tuning Massively Parallel Systems, PUMPS
2011,Barcelona, Spain. (Honorable Mention by NVIDIA)
78/79
This presentation is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

79/79
A Three-Dimensional
Representation method for
Noisy Point Clouds based on
Growing Self-Organizing Maps
accelerated on GPUs
Author:

Sergio Orts Escolano

Supervisors: Dr. José García Rodríguez
Dr. Miguel Ángel Cazorla Quevedo
Doctoral programme in technologies for the information society

 
Scanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsScanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsRizwan Syed
 
Vertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsVertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsMiki Katsuragi
 
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Patryk Bandurski
 
Commit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyCommit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyAlfredo García Lavilla
 
Powerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time ClashPowerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time Clashcharlottematthew16
 
Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Commit University
 
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Mark Simos
 
Search Engine Optimization SEO PDF for 2024.pdf
Search Engine Optimization SEO PDF for 2024.pdfSearch Engine Optimization SEO PDF for 2024.pdf
Search Engine Optimization SEO PDF for 2024.pdfRankYa
 
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticsKotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticscarlostorres15106
 
The Future of Software Development - Devin AI Innovative Approach.pdf
The Future of Software Development - Devin AI Innovative Approach.pdfThe Future of Software Development - Devin AI Innovative Approach.pdf
The Future of Software Development - Devin AI Innovative Approach.pdfSeasiaInfotech2
 
SAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptxSAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptxNavinnSomaal
 
Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Mattias Andersson
 
Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Enterprise Knowledge
 
Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsMark Billinghurst
 

Último (20)

What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024
 
"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii Soldatenko"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii Soldatenko
 
Story boards and shot lists for my a level piece
Story boards and shot lists for my a level pieceStory boards and shot lists for my a level piece
Story boards and shot lists for my a level piece
 
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage CostLeverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
 
Scanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsScanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL Certs
 
Vertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsVertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering Tips
 
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
 
Commit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyCommit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easy
 
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptxE-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
 
Powerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time ClashPowerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time Clash
 
Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!
 
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
 
Search Engine Optimization SEO PDF for 2024.pdf
Search Engine Optimization SEO PDF for 2024.pdfSearch Engine Optimization SEO PDF for 2024.pdf
Search Engine Optimization SEO PDF for 2024.pdf
 
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticsKotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
 
DMCC Future of Trade Web3 - Special Edition
DMCC Future of Trade Web3 - Special EditionDMCC Future of Trade Web3 - Special Edition
DMCC Future of Trade Web3 - Special Edition
 
The Future of Software Development - Devin AI Innovative Approach.pdf
The Future of Software Development - Devin AI Innovative Approach.pdfThe Future of Software Development - Devin AI Innovative Approach.pdf
The Future of Software Development - Devin AI Innovative Approach.pdf
 
SAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptxSAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptx
 
Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?
 
Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024
 
Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR Systems
 

A Three-Dimensional Representation method for Noisy Point Clouds based on Growing Self-Organizing Maps accelerated on GPUs

  • 1. A Three-Dimensional Representation method for Noisy Point Clouds based on Growing Self-Organizing Maps accelerated on GPUs Author: Sergio Orts Escolano Supervisors: Dr. José García Rodríguez Dr. Miguel Ángel Cazorla Quevedo Doctoral programme in technologies for the information society
  • 2. Outline       Introduction 3D Representation using Growing Self-Organizing Maps Improving keypoint detection from noisy 3D observations GPGPU parallel implementations Applications Conclusions 2/79
  • 3.  Introduction • • • •      Index Motivation Framework Goals Proposal 3D representation using growing self-organizing maps Improving keypoint detection from noisy 3D observations GPGPU parallel Implementations Applications Conclusions 3/79
  • 4. Introduction Motivation Motivation  Most computer vision problems require the use of an effective way of representation • • Graphs, regions of interest (ROI), B-Splines, Octrees, histograms, … Key step for later processing stages: feature extraction, feature matching, classification, keypoint detection, … 4/79
  • 5. Introduction Motivation Motivation (II)     3D data captured from the real world • Implicitly is comprised of complex structures and nonlinear models The advent of low cost 3D sensors • • • E.g. Microsoft Kinect, Asus Xtion, PrimeSense Carmine, … RGB and Depth (RGB-D) streams (25 frames per second (fps)) High levels of noise and outliers Only few works in 3D computer vision deal with real-time constraints • 3D data processing algorithms are computationally expensive Finding a 3D model with different features: • • • • Rapid adaptation Good quality of representation (topology preservation) Flexibility (non-stationary data) Robust to noisy data and outliers 5/79
  • 6. Introduction  Motivation Motivation (III) 3D models of objects and scenes have been extensively used in computer graphics • • • Suitable structure for rendering and display Common graphics representations include quadric surfaces [Gotardo et al., 2004], B-spline surfaces [Gregorski et al., 2000], and subdivision surfaces Not general enough to handle such a variety of features: flexibility, adaption, noise-aware, … that are present in computer vision problems (Left) Point cloud captured from a manufactured object (builder helmet). (Right) 3D mesh generated from the captured point cloud (post-processed) 6/79
  • 7. Introduction Framework Framework     Regional research project (GV/2011/034) • Title: “Visual surveillance systems for the identification and characterization of anomalous behaviour” Project financed by the Valencian Government in Spain Regional research project (GRE09-16) • Title: “Visual surveillance systems for the identification and characterization of anomalous behaviour in restricted environments under temporal constraints” Project financed by the University of Alicante in Spain National research project (DPI2009-07144) • Title: “Cooperative Simultaneous Localization and Mapping (SLAM) in large scale environments” Research stay at IPAB – University of Edinburgh (BEFPI/2012/056) • Title: “Real-time 3D feature estimation and keypoint detection of scenes using GPGPUs” 7/79
  • 8. Introduction Goals Goals  Proposal and validation of a 3D representation model and a data fitting algorithm for noisy point clouds • • • • Deals with noisy data and outliers Flexible Dynamic (non stationary data) Topology preserving  An accelerated hardware implementation of the proposed technique • • Considerable speed-up regard CPU implementations Real-time frame rates 8/79
  • 9. Introduction Goals Goals (II)  Validation of the proposed method on different real computer vision problems handling 3D data: • • • Robot vision: 6DoF Egomotion 3D Object recognition Computer-aided design/manufacturing (CAD/CAM)  Integration of 3D data processing algorithms in complex computer vision systems • • Filtering, downsampling, normal estimation, feature extraction, keypoint detection, matching, … The use of a GPU as a general purpose processor 9/79
  • 10. Introduction Proposal Proposal  Growing Self-Organizing Maps (GSOM) for 3D data representation • • • Low cost 3D sensors: noisy data Time-constrained conditions Applications: 3D computer vision problems  Hardware-based implementation of the proposed GSOM method • General Purpose computing on Graphics Processing Units (GPGPU) paradigm  Integrate the entire pipeline of 3D computer vision systems using GPGPU paradigm 10/79
  • 11. Index  Introduction  3D Representation using Growing Self-Organizing Maps     • • • • Review 3D Growing Neural Gas network Experiments: Input space adaptation & normal estimation Extensions of the GNG algorithm Improving keypoint detection from noisy 3D observations GPGPU Parallel Implementations Applications Conclusions 11/79
  • 12. 3D Representation GSOM Review Review  SOMs were originally proposed for data clustering and pattern recognition purposes [Kohonen, 1982, Vesanto and Alhoniemi, 2000, Dittenbach et al., 2001]   As the original model had some drawbacks due to the pre-established topology of the network, growing approaches were proposed in order to deal with this problem The Growing Neural Gas network has been successfully applied to the representation of 2D shapes in many computer vision problems [Stergiopoulou and Papamarkos, 2006, García-Rodríguez et al., 2010, Baena et al., 2013]  There already exist approaches that use traditional SOMs for 3D data representation: [Yu, 1999, Junior et al., 2004] • • • Difficulties to correctly approximate concave structures High computational cost Synthetic data 12/79
  • 13. 3D Representation GSOM Review Review (II)  Moreover, there exist some limitations and unexplored topics in the application of SOM-based methods to 3D representation: • • • • • The majority of these works do not consider the high computational cost of the learning step Do not guarantee response within strict time constraints Assumed perfect point clouds that were noise-free Data fusion (Geometric information + colour information) has not been considered Not dealing with point cloud sequences, only single-shot data 13/79
  • 14. 3D Representation GSOM GNG network 3D Growing Neural Gas Network    Obtaining a reduced and compact representation of 3D data • Self Organizing Maps – Growing Neural Gas Growing Neural Gas Algorithm (GNG) [Fritzke, 1995] • • • • Incremental training algorithm Links between the units in the network are established through Competitive Hebbian Learning (CHL) Topology Preserving Graph Flexibility, growth, rapid adaption and good quality of representation. GNG representation is comprised of nodes (neurons) and connections (edges) • Wire-frame model Initial, intermediate and final states of the GNG learning algorithm 14/79
  • 15. 3D Representation GSOM GNG algorithm GNG algorithm  Input data is defined in ℝ^d (for 3D representation, d = 3)  Adaptation: reconfiguration module; random patterns are presented to the network  Growth: it starts with two neurons and new neurons are inserted  Flexibility: neurons and connections may be removed during the learning stage  This process is repeated until an ending condition is fulfilled: • • Number of neurons/patterns Adaptation error threshold  Highly parallelizable 15/79
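A minimal sketch of the inner GNG adaptation step described above (an illustration in C++, not the thesis code; edge creation through CHL and neuron insertion are omitted): for each random input pattern the two nearest neurons are found, the winner accumulates error, and the winner and its topological neighbours move towards the pattern.

#include <array>
#include <limits>
#include <vector>

struct Neuron {
    std::array<float, 3> w;            // reference vector in R^3
    float error = 0.0f;                // accumulated adaptation error
    std::vector<int> neighbours;       // indices of neurons linked by edges
};

static float sqDist(const std::array<float, 3>& a, const std::array<float, 3>& b) {
    float dx = a[0] - b[0], dy = a[1] - b[1], dz = a[2] - b[2];
    return dx * dx + dy * dy + dz * dz;
}

// One adaptation step for a single input pattern x.
// epsW and epsN are the learning rates of the winner and of its neighbours.
void adaptStep(std::vector<Neuron>& net, const std::array<float, 3>& x,
               float epsW, float epsN) {
    int s1 = -1, s2 = -1;
    float d1 = std::numeric_limits<float>::max(), d2 = d1;
    for (int i = 0; i < (int)net.size(); ++i) {        // find the two nearest neurons
        float d = sqDist(net[i].w, x);
        if (d < d1)      { d2 = d1; s2 = s1; d1 = d; s1 = i; }
        else if (d < d2) { d2 = d; s2 = i; }
    }
    net[s1].error += d1;                                // accumulate error on the winner
    for (int k = 0; k < 3; ++k)                         // move the winner towards the pattern
        net[s1].w[k] += epsW * (x[k] - net[s1].w[k]);
    for (int n : net[s1].neighbours)                    // move its neighbours as well
        for (int k = 0; k < 3; ++k)
            net[n].w[k] += epsN * (x[k] - net[n].w[k]);
    (void)s2;  // s2 would be used by CHL to create/refresh the edge (s1, s2), omitted here
}

The winner search over all neurons is the dominant cost of this step, which is why it is the first stage moved to the GPU later in the presentation.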
  • 16. 3D Representation GSOM Experiments: Data Experiments: Data acquisition   Algorithm independent of the data source We handled 3D data coming from different sensors • Laser unit, a LMS-200 Sick mounted on a sweeping unit. o Outdoor environments. Its range is 80 metres with an error of 1 millimetre per metre • Time-of-Flight camera o SR4000 camera. It has a range of 5-10 metres o The accuracy varies depending on the characteristics of the observed scene, such as object reflectivity and ambient lighting conditions o Generation of point clouds during real time acquisition • Range camera: structured light, Microsoft Kinect device o RGB-D information. Indoor environments. Its range is from 0.8 to 6 metres o Generation of point clouds during real time acquisition 16/79
  • 17. 3D Representation GSOM Experiments: Data Experiments: Data acquisition 3D sensors used for experiments. From left to right: Sick laser unit LMS-200, Time-Of-Flight SR4000 camera and Microsoft Kinect Mobile robots used for experiments. Left: Magellan Pro unit used for indoors. Right: PowerBot used for outdoors. 17/79
  • 18. 3D Representation GSOM Experiments: Data Experiments: Data Sets  Some public data sets have been used to validate the proposed method: • • • Well known Stanford 3D scanning repository. It contains complete models that have been previously processed (noise free) Dataset captured using the Kinect sensor. Released by the Computer Vision Laboratory of the University of Bologna [Tombari et al., 2010a] Own dataset obtained using three previously mentioned 3D sensors 18/79
  • 19. 3D Representation GSOM Experiments: Data Experiments: Data Sets  Blensor software: It allowed us to generate synthetic scenes and to obtain partial views of the generated scene as if a Kinect device was used • It provided us with ground truth information for experiments Simulated scene Simulated scene + Gaussian noise 19/79
  • 20. 3D Representation GSOM Experiments Experiments  GNG method has been applied to 3D data representation • • Input space adaptation Noise removal properties  Extensions of the GNG based algorithm • • • Colour-GNG Sequences management 3D Surface reconstruction 20/79
  • 21. 3D Representation GSOM Exp: GNG 3D representation Experiments: GNG 3D representation Applying GNG to laser data Applying GNG to Kinect data Applying GNG to SR4000 data 21/79
  • 22. 3D Representation GSOM Exp: GNG 3D representation Experiments: GNG 3D representation Applying GNG to Kinect data 22/79
  • 23. 3D Representation GSOM Exp: Input space Adaptation Experiments: Input space adaptation  The GNG method obtains better adaptation to the input space than other filtering methods like the Voxel Grid technique • Obtains a lower adaptation error (Mean Squared Error (MSE)) • Tested on CAD models and simulated scenes Lower error Input space adaptation MSE for different models (metres). Voxel Grid versus GNG. Numbers in bold provide the best results. 23/79
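A small sketch (an assumption of how the reported figure is computed, shown brute force; in practice a k-d tree would be used for the nearest-node query) of the input-space adaptation MSE: the mean squared distance from every input point to its closest node of the reduced representation, be it a GNG neuron or a Voxel Grid centroid.

#include <array>
#include <limits>
#include <vector>

// Mean squared distance from every input point to its nearest representation node.
float adaptationMSE(const std::vector<std::array<float, 3>>& cloud,
                    const std::vector<std::array<float, 3>>& nodes) {
    double sum = 0.0;
    for (const auto& p : cloud) {
        float best = std::numeric_limits<float>::max();
        for (const auto& q : nodes) {
            float dx = p[0] - q[0], dy = p[1] - q[1], dz = p[2] - q[2];
            float d = dx * dx + dy * dy + dz * dz;
            if (d < best) best = d;
        }
        sum += best;
    }
    return cloud.empty() ? 0.0f : (float)(sum / cloud.size());
}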
  • 24. 3D Representation GSOM Exp: Input space Adaptation Experiments: Input space adaptation (II) Noisy model σ = 0.4 GNG representation Original CAD model Voxel grid representation Filtering quality using 10,000 nodes. GNG vs Voxel Grid comparison 24/79
  • 25. 3D Representation GSOM Exp: Normal estimation Experiments: Normal estimation   Surface normals are important properties of a geometric surface, and are heavily used in many areas such as computer vision and computer graphics Normal or curvature estimation can be affected by the presence of noise Normal estimation over noisy input data  The representation obtained using the GNG method allows computing more accurate normal information 25/79
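A hedged sketch of the standard PCA normal estimation this comparison relies on (assuming the Eigen library; neighbour search is omitted): the normal is taken as the eigenvector of the neighbourhood covariance matrix with the smallest eigenvalue, which is why noisy neighbourhoods produce unstable directions.

#include <vector>
#include <Eigen/Dense>

// Estimate the normal of a point from its k nearest neighbours (already gathered).
Eigen::Vector3f estimateNormal(const std::vector<Eigen::Vector3f>& neighbours) {
    Eigen::Vector3f mean = Eigen::Vector3f::Zero();
    for (const auto& p : neighbours) mean += p;
    mean /= (float)neighbours.size();

    Eigen::Matrix3f cov = Eigen::Matrix3f::Zero();       // covariance of the neighbourhood
    for (const auto& p : neighbours) {
        Eigen::Vector3f d = p - mean;
        cov += d * d.transpose();
    }
    cov /= (float)neighbours.size();

    Eigen::SelfAdjointEigenSolver<Eigen::Matrix3f> solver(cov);
    return solver.eigenvectors().col(0);                  // eigenvector of the smallest eigenvalue
}

Filtering the cloud with the GNG before this estimation smooths the neighbourhoods that feed the covariance matrix, which is what produces the more stable normals shown in the next slide.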
  • 26. 3D Representation GSOM Exp: Input space Adaptation Experiments: Normal estimation (II) Top: Normal estimation on a filtered point cloud produced by the GNG method. Bottom: Normal estimation on a raw point cloud.  Normals are considered more stable as their distribution is smooth and they show fewer abrupt changes in their directions 26/79
  • 27. 3D Representation GSOM Extensions: Colour-GNG Extension: Colour-GNG   Modern 3D sensors provide colour information (e.g. Kinect, Carmine, Asus Xtion, … ) GNG is extended considering colour information during the learning step • • • • Input data is defined in ℝ^d where d = 6 Colour information is considered during the weight adaptation step but it was not included in the CHL (winning neurons) process o We still focus on topology preservation The winning-neuron step only computes the Euclidean distance using the x, y, z components No post-processing steps are required as neurons’ colour is obtained during the learning process 27/79
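A minimal sketch (an illustration, not the thesis code) of the Colour-GNG idea described above: weight vectors live in R^6 (x, y, z, r, g, b), but the distance used to pick the winner only considers the spatial components, while the adaptation step pulls all six components towards the sample.

#include <array>
#include <limits>
#include <vector>

using Weight6 = std::array<float, 6>;   // x, y, z, r, g, b

// Winner search: Euclidean distance restricted to the spatial part.
int findWinner(const std::vector<Weight6>& neurons, const Weight6& x) {
    int best = -1;
    float bestD = std::numeric_limits<float>::max();
    for (int i = 0; i < (int)neurons.size(); ++i) {
        float d = 0.0f;
        for (int k = 0; k < 3; ++k) {                   // only x, y, z
            float diff = neurons[i][k] - x[k];
            d += diff * diff;
        }
        if (d < bestD) { bestD = d; best = i; }
    }
    return best;
}

// Adaptation: geometry and colour are both pulled towards the sample.
void adaptWinner(Weight6& w, const Weight6& x, float eps) {
    for (int k = 0; k < 6; ++k) w[k] += eps * (x[k] - w[k]);
}

Keeping colour out of the winner search is what preserves the purely geometric topology while still producing coloured neurons without any post-processing.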
  • 28. 3D Representation GSOM Extensions: Colour-GNG Extension: Colour-GNG (II) (a),(b),(c) show original point clouds. (d),(e),(f) show downsampled point clouds using the proposed method 28/79
  • 29. 3D Representation GSOM Extensions: Colour-GNG Extension: Colour-GNG (III)   Mario figure is down-sampled using the Colour-GNG method Results are similar to those obtained with the colour interpolation post-processing step 29/79
  • 30. 3D Representation GSOM Extensions: Sequences Extension: Sequences management     Extension of the GNG for processing sequences of point clouds It is not required to restart the learning It provides a speed-up in the runtime as neurons are kept between point clouds This extension was applied in a mobile robotics application An improved workflow to manage point cloud sequences using the GNG algorithm 30/79
  • 31. 3D Representation GSOM Extension: 3D Reconstruction Extension: 3D Surface Reconstruction   Three-dimensional surface reconstruction is not considered in the original GNG algorithm as it only generates wire-frame models [Holdstein and Fischer, 2008, Do Rego et al., 2010, Barhak, 2002] have already considered the creation of 3D triangular faces by modifying the original GNG algorithm •  Post-processing steps are required to avoid gaps and holes in the final mesh We extended the CHL, developing a method able to produce full 3D meshes during the learning stage • • No post-processing steps are required A new learning scheme was developed 31/79
  • 32. 3D Representation GSOM Extension: 3D Reconstruction Extension: 3D Surface Reconstruction (II)  Avoid non-manifold and overlapping edges • •   When a neuron has more than 2 neighbours, it is checked whether the face to be created already exists A face is created whenever the already existing edges or the new ones form a triangle The neuron insertion process was also modified Considered situations for edge and face creation during the extended CHL 32/79
  • 33. 3D Representation GSOM Extension: 3D Reconstruction Extension: 3D Surface Reconstruction (III) Left: The triangle formed by these 3 neurons is close to a right triangle Right: The edge connecting s1 and ni is removed as the angle formed is greater than 90 degrees Edge removal constraint based on the Thales sphere Left: neuron insertion between the neuron q with highest error and its neighbour f with highest error. Right: four new triangles and two edges are created considering r, q and f. Face creation during the insertion of new neurons 33/79
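The Thales-sphere constraint reduces to a single dot product, as in the following sketch (an assumed implementation of the geometric test, using Eigen): a neuron p lies inside the sphere whose diameter is the edge (a, b) exactly when the angle a-p-b is obtuse, which is the condition used to remove the edge.

#include <Eigen/Dense>

// True if p falls inside the Thales sphere built on the segment (a, b),
// i.e. the angle formed at p by the segment endpoints is greater than 90 degrees.
bool insideThalesSphere(const Eigen::Vector3f& a,
                        const Eigen::Vector3f& b,
                        const Eigen::Vector3f& p) {
    return (a - p).dot(b - p) < 0.0f;
}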
  • 34. 3D Representation GSOM Extension: 3D Reconstruction Extension: 3D Surface Reconstruction (IV) Different views of reconstructed models using an existing GNG-based method [Do Rego et al., 2010] for surface reconstruction  Post-processing steps were avoided, causing gaps and holes in the final 3D reconstructed models 34/79
  • 35. 3D Representation GSOM Extension: 3D Reconstruction Extension: 3D Surface Reconstruction (V) Reconstructed models using our extended GNG method for face reconstruction and without applying post-processing steps 35/79
  • 36. 3D Representation GSOM Extension: 3D Reconstruction Extension: 3D Surface Reconstruction (VI) Top: 3D model of a person (Kinect sensor). Bottom: digitized foot. (foot digitizer) Left: Noisy point clouds captured using the Kinect sensor. Right: 3D reconstruction using the proposed method. 36/79
  • 37. Index       Introduction 3D Representation using Growing Self-Organizing Maps Improving keypoint detection from noisy 3D observations • • • • Review Improving keypoint detection Correspondences matching Results GPGPU Parallel Implementations Applications Conclusions 37/79
  • 38. Improving Keypoint detection  Review Review Filtering and down-sampling have become essential steps in 3D data processing General System Overview Motivation: dealing with noisy data obtained from 3D sensors such as the Microsoft Kinect or laser scanners  Result: Improving 3D keypoint detection and therefore the registration results We propose the use of the GNG algorithm for downsampling and filtering 3D data  Beneficial attributes will be demonstrated through the 3D registration problem 38/79
  • 39. Improving Keypoint detection Review Review (II) Registration: Aligning various 3D point cloud data views into a complete model Pairwise matching 39/79
  • 40. Improving Keypoint detection Keypoint detection 3D Keypoint detection  Applying keypoint detection algorithms to filtered point clouds  State-of-the-art 3D keypoint detectors • Different techniques are used to test and measure the improvement achieved using GNG method to filter and downsample input data 40/79
  • 41. Improving Keypoint detection Keypoint detection 3D Keypoint detection (II)  3D Keypoint detectors • SIFT3D: using depth as the intensity value in the original SIFT algorithm • Harris3D: uses surface normals of 3D points • Tomasi3D: performs eigenvalue decomposition over the covariance matrix • Noble3D: evaluates the ratio between the determinant and the trace of the covariance matrix 41/79
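As a rough illustration of the covariance-based responses named above (assumed from their standard definitions, not taken from the thesis implementation): given the 3x3 covariance matrix of a neighbourhood, Tomasi3D keeps the smallest eigenvalue, Harris3D combines determinant and trace, and Noble3D takes their ratio.

#include <Eigen/Dense>

struct KeypointResponses {
    float tomasi;   // smallest eigenvalue
    float harris;   // det - k * trace^2
    float noble;    // det / trace
};

// C is the covariance matrix of the local neighbourhood; k is the usual Harris constant.
KeypointResponses responsesFromCovariance(const Eigen::Matrix3f& C, float k = 0.04f) {
    Eigen::SelfAdjointEigenSolver<Eigen::Matrix3f> solver(C);
    float det = C.determinant();
    float tr = C.trace();
    KeypointResponses r;
    r.tomasi = solver.eigenvalues()(0);                  // eigenvalues are sorted ascending
    r.harris = det - k * tr * tr;
    r.noble  = (tr != 0.0f) ? det / tr : 0.0f;
    return r;
}

Keypoints are then the points whose response is a local maximum above a threshold, which is why smoother GNG-filtered neighbourhoods lead to more repeatable detections.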
  • 42. Improving Keypoint detection Feature descriptors 3D Feature descriptors  Feature descriptors are calculated over detected keypoints to perform feature matching • • PFH and FPFH: based on a histogram of the differences of angles between the normals of the neighbouring points SHOT and CSHOT: a spherical grid centered on the point divides the neighbourhood so that in each grid bin a weighted histogram of normals is obtained FPFH CSHOT 42/79
  • 43. Improving Keypoint detection Feature matching Feature matching (II)  Correspondences between keypoints are validated through the RANSAC algorithm, rejecting inconsistent correspondences 43/79
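A compact sketch of the RANSAC rejection loop (assuming Eigen's umeyama for the rigid estimate; the threshold and iteration count below are illustrative, not the values used in the thesis): random minimal sets of correspondences propose a transformation, and the hypothesis with most inliers defines which matches are kept.

#include <random>
#include <vector>
#include <Eigen/Dense>
#include <Eigen/Geometry>

// src[i] <-> dst[i] are the tentative keypoint correspondences.
std::vector<int> ransacInliers(const std::vector<Eigen::Vector3f>& src,
                               const std::vector<Eigen::Vector3f>& dst,
                               int iterations = 1000, float inlierDist = 0.02f) {
    std::mt19937 rng(42);
    std::uniform_int_distribution<int> pick(0, (int)src.size() - 1);
    std::vector<int> bestInliers;

    for (int it = 0; it < iterations; ++it) {
        // Minimal sample: 3 correspondences define a rigid transformation
        // (degenerate, e.g. collinear, samples are not filtered in this sketch).
        Eigen::Matrix3Xf s(3, 3), d(3, 3);
        for (int j = 0; j < 3; ++j) {
            int idx = pick(rng);
            s.col(j) = src[idx];
            d.col(j) = dst[idx];
        }
        Eigen::Matrix4f T = Eigen::umeyama(s, d, false);  // rigid estimate, no scaling

        std::vector<int> inliers;
        for (int i = 0; i < (int)src.size(); ++i) {
            Eigen::Vector3f p = (T * src[i].homogeneous()).head<3>();
            if ((p - dst[i]).norm() < inlierDist) inliers.push_back(i);
        }
        if (inliers.size() > bestInliers.size()) bestInliers = inliers;
    }
    return bestInliers;   // correspondences consistent with the best hypothesis
}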
  • 44. Improving Keypoint detection Results Results: Feature matching      Correspondences matching computed on different input data Top: raw point clouds Middle: reduced representation using the GNG (20, 000 neurons) Bottom: reduced representation using the GNG (10, 000 neurons) RANSAC is used to reject wrong matches Raw 3D data GNG 20,000 nodes GNG 10,000 nodes 44/79
  • 45. Improving Keypoint detection Results Results: Transformation errors Lowest max errors Lowest transformation error *1 Mean, median, minimum and maximum RMS*2 errors of the estimated transformations using different keypoint detectors. (metres). *1 Uniform Sampling *2 Root Mean Square - Transformation error 45/79
  • 46. Index       Introduction 3D Representation using Growing Self-Organizing Maps Improving keypoint detection from noisy 3D observations GPGPU Parallel Implementations • • • • Graphics Processing Unit GPU-based implementation of the GNG algorithm GPU-based tensor extraction algorithm Conclusions Applications Conclusions 46/79
  • 47. GPGPU Implementations GPUs Graphics Processing Unit  GPUs have democratized High Performance Computing (HPC) • Massively parallel processors on a commodity PC • Great ratio FLOP/€ compared with other solutions  However, this is not for free • New programming model • Algorithms need to be re-thought and re-implemented  Growing Neural Gas algorithm is computationally expensive • Most computer vision applications are time-constrained • A GPGPU implementation is proposed 47/79
  • 48. GPGPU Implementations GPUs Graphics Processing Unit (II)      More transistors for data processing GPUs are comprised of streaming multiprocessors High GPU memory bandwidth GPGPU: General Purpose computing on Graphics Processing Units A key hardware feature is that the cores are SIMT • Single instruction multiple threads NVIDIA’s G80 CUDA architecture 48/79
  • 49. GPGPU Implementations GPU Implementation GNG GPU Implementation GNG  Stages of the GNG algorithm that are highly parallelizable • • • • Calculate distance to neurons for every pattern Search winning neurons Delete neurons and edges Search neuron with max error  Other improvements • • Avoid memory transfers between CPU and GPU Hierarchy of memories Highly parallelizable stages 49/79
  • 50. GPGPU Implementations GPU Implementation GNG Parallel Min/Max Reduction      A parallel Min/Max reduction that computes the Min/Max of large arrays of values (Neurons) Strategy used to find Min/Max winning neurons Reduces the linear cost O(n) of the sequential version to a logarithmic number of steps, O(log n) Provides better performance for a large number of neurons Example of Parallel Reduction Algorithm Proposed version: 2MinParallelReduction • Extended version to obtain the 2 minimum values in the same number of steps 50/79
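A CPU sketch that mirrors the structure of the 2MinParallelReduction (an illustration of the idea, not the CUDA kernel itself): at each of the log2(n) passes every surviving position keeps the two smallest values found in its pair, so after the last pass the first entry holds the two global minima, i.e. the winner and the second winner.

#include <algorithm>
#include <limits>
#include <vector>

// Each element carries the two best (smallest) distances seen so far.
struct TwoMin { float first, second; };

static TwoMin mergeTwoMin(TwoMin a, TwoMin b) {
    float v[4] = { a.first, a.second, b.first, b.second };
    std::sort(v, v + 4);
    return { v[0], v[1] };
}

// Tree reduction over the per-neuron distances, log2(n) passes.
TwoMin twoMinReduction(const std::vector<float>& dist) {
    const float INF = std::numeric_limits<float>::max();
    std::vector<TwoMin> buf(dist.size());
    for (size_t i = 0; i < dist.size(); ++i) buf[i] = { dist[i], INF };

    for (size_t stride = 1; stride < buf.size(); stride *= 2)      // one pass per level
        for (size_t i = 0; i + stride < buf.size(); i += 2 * stride)
            buf[i] = mergeTwoMin(buf[i], buf[i + stride]);         // pairs processed in parallel on the GPU

    return buf.empty() ? TwoMin{ INF, INF } : buf[0];
}

On the GPU the inner loop is executed by one thread per pair within shared memory, which is where the logarithmic number of steps comes from.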
  • 51. GPGPU Implementations Experimental Setup Experimental setup  Main GNG Parameters: • • ~0-20,000 neurons and a maximum λ (entries per iteration) of 1,000-2,000 Others parameters have been fixed based on previous works [García-Rodriguez et al., 2012] o єw = 0.1 , єn = 0.001 o amax = 250, α = 0.5 , β = 0.0005  Hardware • GPUs: CUDA capable devices used in experiments • CPU: single thread and multiple thread implementations were tested o Intel Core i3 540 3.07Ghz 51/79
  • 52. GPGPU Implementations Exp: 2MinParallelReduction Experiments: 2MinParallelReduction Runtime and speed-up using CPU and GPU implementations 52/79
  • 53. GPGPU Implementations Exp: GNG Runtime Experiments: GNG learning runtime GPU and CPU GNG runtime, and speed-up for different devices 53/79
  • 54. GPGPU Implementations Exp: Hybrid version Experiments: Hybrid version  The CPU implementation was faster for small network sizes • • We developed a hybrid implementation The GPU version automatically takes over once its computing time is detected to be lower than that of the CPU Example of CPU and Hybrid GNG runtime for different devices 54/79
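A sketch of the switching policy (illustrative only; the crossover size below is a hypothetical constant, whereas in practice the crossover is detected by timing both versions): the hybrid version keeps the learning on the CPU while the network is small and hands it over to the GPU once the number of neurons makes the parallel version faster.

#include <cstddef>

// Hypothetical crossover point; the real implementation measures CPU and GPU times.
constexpr std::size_t kGpuCrossoverNeurons = 2000;

enum class Backend { CPU, GPU };

Backend chooseBackend(std::size_t numNeurons) {
    return (numNeurons < kGpuCrossoverNeurons) ? Backend::CPU : Backend::GPU;
}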
  • 55. GPGPU Implementations GPU Feature extraction GPU-based Tensor extraction algorithm  Time-constrained 3D feature extraction • Most feature descriptors cannot be computed online due to their high computational complexity o 3D Tensor - [Mian et al., 2006b] o Geometric Histogram - [Hetzel et al., 2001] o Spin Images - [Andrew Johnson, 1997] • • • Highly parallelizable Geometrical properties Invariant to linear transformations  An accelerated GPU-based implementation of an existing 3D feature extraction algorithm is proposed • Accelerate the entire pipeline of RGB-D based computer vision systems 55/79
  • 56. GPGPU Implementations GPU Feature extraction GPU-based Tensor extraction algorithm (II)    The surface area of the mesh intersecting each bin of the grid is the value of the tensor element As many threads as voxels are launched in parallel, where each GPU thread represents a voxel (bin) of the grid Each thread computes the area of intersection between the mesh and its corresponding voxel using Sutherland-Hodgman’s polygon clipping algorithm. [Foley et al., 1990] 56/79
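A sketch of the Sutherland-Hodgman step each thread performs (a reference formulation of the classic algorithm under the assumption that it clips a triangle against one half-space of the voxel; the full per-voxel routine clips against all six voxel faces and then accumulates the clipped polygon area into the tensor bin).

#include <vector>
#include <Eigen/Dense>

// Clip a convex polygon against the half-space { x : n.dot(x) <= d }.
std::vector<Eigen::Vector3f> clipAgainstPlane(const std::vector<Eigen::Vector3f>& poly,
                                              const Eigen::Vector3f& n, float d) {
    std::vector<Eigen::Vector3f> out;
    if (poly.empty()) return out;
    for (size_t i = 0; i < poly.size(); ++i) {
        const Eigen::Vector3f& a = poly[i];
        const Eigen::Vector3f& b = poly[(i + 1) % poly.size()];
        bool aIn = n.dot(a) <= d;
        bool bIn = n.dot(b) <= d;
        if (aIn) out.push_back(a);
        if (aIn != bIn) {                                   // edge crosses the clipping plane
            float t = (d - n.dot(a)) / n.dot(b - a);
            out.push_back(a + t * (b - a));
        }
    }
    return out;   // polygon restricted to the half-space
}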
  • 57. GPGPU Implementations Exp: performance Experiments: Performance Runtime comparison and speed-up obtained for proposed methods using different graphics boards 57/79
  • 58. Index       Introduction 3D Representation using Growing Self-Organizing Maps Improving keypoint detection from noisy 3D observations GPGPU Parallel Implementations Applications • • • Robotics Computer Vision CAD/CAM Conclusions 58/79
  • 59. Applications Exp: performance Applications  Different case studies where the GNG-based method proposed in this PhD thesis was applied to different areas • Robot Vision o 6DoF Pose Registration • Computer Vision o 3D object recognition under cluttered conditions • CAD/CAM o Rapid Prototyping in Shoe Last Manufacturing 59/79
  • 60. Applications Robotics Robotics: 6DoF pose registration  The main goal of this application is to perform six degrees of freedom (6DoF) pose registration in semi-structured environments •  We combined our accelerated GNG-based algorithm with the method proposed in [Viejo and Cazorla, 2013] •  Man-made indoor and outdoor environments Planar patches extraction It provides a good starting point for Simultaneous Location and Mapping (SLAM)  GNG was applied directly to raw 3D data 60/79
  • 61. Applications Robotics Robotics: 6DoF pose registration (II) Without GNG   GNG Left: planar patches extracted from SR4000 camera Right: filtered data using the GNG network: more planar patches are extracted 61/79
  • 62. Applications Robotics Robotics: 6DoF pose registration (III) Robot trajectory Without GNG   GNG Planar based 6DoF pose registration results Left image shows map building results without using GNG while the results shown on the Right are obtained after computing a GNG mesh 62/79
  • 63. Applications 3D Object recognition 3D Object Recognition  The main goal of this application is the recognition of objects under time constraints and cluttered conditions  The GPU-based implementation of the semi-local surface feature (tensor) is successfully used to recognize objects in cluttered scenes  A library of models is constructed offline, storing all extracted 3D tensors in an efficient way using a hash table • Multiple views 63/79
  • 64. Applications 3D Object recognition 3D Object Recognition (II)  Object recognition is performed on scenes with different levels of occlusion  Objects are occluded by other objects both stored and not stored in the library  The average recognition rate was 84%, wrong matches 16% and false negatives 0% 64/79
  • 65. Applications 3D Object recognition 3D Object Recognition (III)    The GPU-based 3D feature implementation is successfully used in a 3D object recognition application Parallel matching is performed on the GPU: correlation function The implemented prototype took around 800 ms, using the GPU implementation, to perform 3D object recognition on the entire scene Scene 2 65/79
  • 66. Applications CAD: Rapid Prototyping Rapid Prototyping in Shoe Last Manufacturing  With the advent of CAD/CAM and rapid acquisition devices it is possible to digitize old raised shoe lasts so they can be reused in the shoe last design software Process to reconstruct existing shoe lasts and computing the topology preservation error with respect to the original CAD design 66/79
  • 67. Applications CAD: Rapid Prototyping Rapid Prototyping in Shoe Last Manufacturing (II)  The main goal of this research is to obtain a grid of points that is adapted to the topology of the footwear shoe last from a sequence of sections with disorganized points acquired by sweeping an optical laser digitizer Typical sequence of sections of a shoe last. Noisy data obtained from the digitizer 67/79
  • 68. Applications CAD: Rapid Prototyping Rapid Prototyping in Shoe Last Manufacturing (III) Voxel Grid versus GNG: Mean error along different sections of the shoe last 68/79
  • 69. Applications CAD: Rapid Prototyping Rapid Prototyping in Shoe Last Manufacturing (IV) Input space GNG nodes VG nodes GNG vs VG topological preservation comparison 3D Reconstruction GNG 3D Reconstruction VG 69/79
  • 70. Index       Introduction 3D Representation using Growing Self-Organizing Maps Improving keypoint detection from noisy 3D observations GPGPU Parallel Implementations Applications Conclusions • • • Contributions Future work Publications 70/79
  • 71. Conclusions Contributions Contributions  Contributions made in the topic of research: • Proposal of a new method to create compact, reduced and efficient 3D representations from noisy data ‐ Development of a GNG-based method capable of dealing with different sensors ‐ Extension of the GNG algorithm to consider colour information ‐ Extension of the GNG algorithm for 3D surface reconstruction ‐ Sequences management ‐ Integration of the proposed method in 3D keypoint detection algorithms, improving their performance ‐ GPU-based implementation to accelerate the learning process of the GNG and NG algorithms ‐ A hybrid implementation of the GNG algorithm that takes advantage of the CPU and GPU processors 71/79
  • 72. Conclusions Contributions Contributions (II) • Integration of 3D data processing algorithms in complex computer vision systems: ‐ Normal estimation has been ported to the GPU, considerably decreasing its runtime ‐ Point cloud triangulation has been ported to the GPU, accelerating its runtime ‐ A GPU time-constrained implementation of a 3D feature extraction algorithm • Application of the proposed method in various real computer vision applications: ‐ Robotics: localization and mapping: 6DoF pose registration ‐ Computer vision: 3D object recognition under cluttered conditions ‐ CAD/CAM: rapid prototyping in shoe last manufacturing 72/79
  • 73. Conclusions Future work Future work  Other improvements on the GPU implementation of the GNG algorithm: • • • • Using multi-GPU to manage several neural networks simultaneously Distributed computing Testing new architectures: Intel Xeon Phi [Fang et al., 2013a] Generating random patterns using GPU  More applications of the accelerated GNG algorithm will be studied in the future • • Clustering multi-dimensional data: Big Data Medical Image Reconstruction  Extension of the real-time implementation of the 3D tensor • • Visual features extracted from RGB information Improve implicit keypoint detector used by the 3D tensor 73/79
  • 74. Conclusions Publications Publications • 4 JCR Journal papers o “Real-time 3D semi-local surface patch extraction using GPGPU” S. Orts-Escolano, V. Morell, J. Garcia-Rodriguez, M. Cazorla, R.B. Fisher; Journal of Real-Time Image Processing. December 2013; ISSN: 1861-8219; Impact Factor: 1.156 (JCR 2012) o “GPGPU implementation of growing neural gas: Application to 3D scene reconstruction” S. Orts, J. García Rodríguez, D. Viejo, M. Cazorla, V. Morell; J. Parallel Distrib. Comput. 72(10); pp: 1361-1372 (2012); ISSN: 0743-7315; Impact Factor: 1.135 (JCR 2011) o “3D-based reconstruction using growing neural gas landmark: application to rapid prototyping in shoe last manufacturing” A. Jimeno-Morenilla, J. García-Rodriguez, S. Orts-Escolano, M. Davia-Aracil; The International Journal of Advanced Manufacturing Technology: May 2013. Vol 69. pp: 657-668; ISSN: 0268-3768; Impact Factor: 1.205 (JCR 2012) o “Autonomous Growing Neural Gas for applications with time constraint: Optimal parameter estimation” J. García Rodríguez, A. Angelopoulou, J. M. García Chamizo, A. Psarrou, S. Orts-Escolano, V. Morell-Giménez; Neural Networks 32: pp: 196-208 (2012), ISSN: 0893-6080; Impact Factor: 1.927 (JCR 2012) 74/79
  • 75. Conclusions Publications Publications (II) • International conferences o “Point Light Source Estimation based on Scenes Recorded by a RGB-D camera” B. Boom, S. Orts-Escolano, X. Ning, S. McDonagh, P. Sandilands, R.B. Fisher; British Machine Vision Conference, BMVC 2013, Bristol, UK. Rank B o “Point Cloud Data Filtering and Downsampling using Growing Neural Gas” S. Orts-Escolano, V. Morell, J. Garcia-Rodriguez and M. Cazorla; International Joint Conference on Neural Networks, IJCNN 2013, Dallas, Texas. Rank A o “Natural User Interfaces in Volume Visualisation Using Microsoft Kinect” A. Angelopoulou, J. García Rodríguez, A.Psarrou, M. Mentzelopoulos, B. Reddy, S. OrtsEscolano, J.A. Serra. International Conference on Image Analysis and Processing, ICIAP2013, Naples, Italy: 11-19. Rank B o “Improving Drug Discovery using a neural networks based parallel scoring functions” H. Perez-Sanchez, G. D. Guerrero, J. M. Garcia, J. Pena, J. M. Cecilia, G. Cano, S. OrtsEscolano and J. Garcia-Rodriguez. International Joint Conference on Neural Networks, IJCNN 2013, Dallas, Texas. Rank A 75/79
  • 76. Conclusions Publications Publications (III) • International conferences o “Improving 3D Keypoint Detection from Noisy Data Using Growing Neural Gas” J. García Rodríguez, M. Cazorla, S. Orts-Escolano, V. Morell. International Work-Conference on Artificial Neural Networks, IWANN 2013, Puerto de la Cruz, Tenerife, Spain: 480-487. Rank B o “3D Hand Pose Estimation with Neural Networks” J. A. Serra, J. García Rodríguez, S. Orts-Escolano, J. M. García Chamizo, A. Angelopoulou, A. Psarrou, M. Mentzelopoulos, J. Montoyo-Bojo, E. Domínguez. International WorkConference on Artificial Neural Networks, IWANN 2013, Puerto de la Cruz, Tenerife, Spain: 504-512. Rank B o “3D Gesture Recognition with Growing Neural Gas” J. A. Serra-Perez, J. Garcia-Rodriguez, S. Orts-Escolano, J. M. Garcia-Chamizo, A. Angelopoulou, A. Psarrou, M. Mentzeopoulos, J. Montoyo Bojo. International Joint Conference on Neural Networks. IJCNN 2013, Dallas, Texas. Rank A o “Multi-GPU based camera network system keeps privacy using Growing Neural Gas” S. Orts-Escolano, J. García Rodríguez, V. Morell, J.Azorín López, J. M. García Chamizo. International Joint Conference on Neural Networks (IJCNN) 2012, Brisbane, Australia, June: 1-8. Rank A 76/79
  • 77. Conclusions Publications Publications (IV) • International conferences o “A study of registration techniques for 6DoF SLAM” V. Morell, M.Cazorla, D. Viejo, S. Orts-Escolano, J. García Rodríguez. International Conference of the Catalan Association for Artificial Intelligence, CCIA 2012, University of Alacant, Spain: 111-120. Rank B o “Fast Autonomous Growing Neural Gas” J. García Rodríguez, A. Angelopoulou, J. M. García Chamizo, A. Psarrou, S. Orts, V. Morell. International Joint Conference on Neural Networks, IJCNN 2011, San Jose, California: 725-732. Rank A o “Fast Image Representation with GPU-Based Growing Neural Gas” J.García Rodríguez, A. Angelopoulou, V. Morell, S. Orts, A. Psarrou, J. M. García Chamizo. International Work-Conference on Artificial Neural Networks, IWANN 2011, Torremolinos-Málaga, Spain: 58-65. Rank B o “Video and Image Processing with Self-Organizing Neural Networks” J. García Rodríguez, E. Domínguez, A. Angelopoulou, A. Psarrou, F. J. Mora-Gimeno, S. Orts, J. M. García Chamizo. International Work-Conference on Artificial Neural Networks, IWANN 2011, Torremolinos-Málaga, Spain: 98-104. Rank B 77/79
  • 78. Conclusions Publications Publications (V) • National conferences o “Procesamiento de múltiples flujos de datos con Growing Neural Gas sobre Multi-GPU” S. Orts-Escolano, J. García-Rodríguez, V. Morell-Giménez. Jornadas de Paralelismo JP, Elche, España, 2012 • Book chapters o “A Review of Registration Methods on Mobile Robots” V. Morell-Gimenez, S. Orts-Escolano, J. García Rodríguez, M. Cazorla, D. Viejo. Robotic Vision: Technologies for Machine Learning and Vision Applications. IGI GLOBAL o “Computer Vision Applications of Self-Organizing Neural Networks” J. García-Rodríguez, J. M. García-Chamizo, S. Orts-Escolano, V. Morell-Gimenez, J.Serra-Perez, A. Angelolopoulou, M. Cazorla, D. Viejo. Robotic Vision: Technologies for Machine Learning and Vision Applications. IGI GLOBAL • Poster presentations o “6DoF pose estimation using Growing Neural Gas Network” S.Orts, J. Garcia-Rodriguez, D. Viejo, M. Cazorla, V. Morell, J. Serra. 5th International Conference on Cognitive Systems, Cogsys 2012, TU Vienna, Austria o “GPU Accelerated Growing Neural Gas Network” S. Orts, J. Garcia, V.Morell. Programming and Tuning Massively Parallel Systems, PUMPS 2011,Barcelona, Spain. (Honorable Mention by NVIDIA) 78/79
  • 79. This presentation is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. 79/79
  • 80. A Three-Dimensional Representation method for Noisy Point Clouds based on Growing Self-Organizing Maps accelerated on GPUs Author: Sergio Orts Escolano Supervisors: Dr. José García Rodríguez Dr. Miguel Ángel Cazorla Quevedo Doctoral programme in technologies for the information society