Seminar Presentation @Jacobs University, 02 March 2011.
Based on A. Zavodny, P. Flynn, X. Chen. Region Extraction in Large-Scale Urban LIDAR Data. In 2009 IEEE 12th International Conference on Computer Vision Workshops, ICCV Workshops, pages 1801-1808.
Region Extraction in Large-Scale Urban LIDAR Data
1. Region Extraction in Large-Scale Urban LIDAR Data Topics in Machine Vision Spring 2011 Alexandru Tandrau alexandru@tandrau.com
2. Input: a large 3D point cloud of an urban environment, acquired by moving vehicles equipped with GPS-registered SICK LIDAR scanners. Output: a labeling of the point cloud that identifies planar regions.
6. Best Fitting Plane. Compute the covariance matrix of the point set; compute its eigenvalues; take the eigenvector v corresponding to the smallest eigenvalue; identify the coefficients of the plane equation with this eigenvector. Tip: the GSL (GNU Scientific Library) contains, among others, eigenvalue/eigenvector routines usable from C++.
9. Grow Regions. Start from the ε-neighborhood with maximum planarity; find points to add to the region; update the region's planar approximation; repeat for the remaining neighborhoods. Pipeline: Compute Planarity Measures, Grow Regions, Combine Regions, Prune Regions.
10. Combine Regions. Regions Ri and Rj describe the same surface if the normals of the regions point in similar directions (threshold Tnra) and the regions are close together. Identify connected components in the graph formed by valid region pairs.
11. Prune Regions. Prune regions whose average planarity measure is above a threshold Tapm.
14. Experimental Results. Campus dataset, 95 million range-scanned points. Data split into subsets (based on acquisition timestamp); subsets processed independently. Execution time for the entire dataset: 21.4 minutes, with roughly 80% accuracy.
16. Questions? Presentation and a collection of links also available at http://www.tandrau.com/mv_seminar
17. References
A. Zavodny, P. Flynn, X. Chen. Region Extraction in Large-Scale Urban LIDAR Data. In 2009 IEEE 12th International Conference on Computer Vision Workshops (ICCV Workshops), pages 1801-1808.
P. Flynn, A. Jain. Surface Classification: Hypothesis Testing and Parameter Estimation. In Proceedings CVPR '88, pages 261-267, June 1988.
C. C. Chen, I. Stamos. Range Image Segmentation for Modeling and Object Detection in Urban Scenes. In Proc. 3DIM '07, pages 185-192, 2007.
P. K. Allen, A. Troccoli, B. Smith, I. Stamos, S. Murray. The Beauvais Cathedral Project. In Proc. Computer Vision and Pattern Recognition Workshop CVPRW '03, volume 1, pages 10-10, June 2003.
J. Poppinga, N. Vaskevicius, A. Birk, K. Pathak. Fast Plane Detection and Polygonalization in Noisy 3D Range Images. In Proc. IROS 2008, International Conference on Intelligent Robots and Systems.
Editor's notes
Navteq history. Founded in 1985: people driving around carrying dictaphones. 1988: DriverGuide computer kiosk. Later moved to a business model where Navteq licenses its maps to other companies. Bought by Nokia for $8 billion in 2008. Usage: in-car navigation systems (85% of car makers), Garmin, Magellan, etc., Nokia Maps, Microsoft Flight Simulator X.
Built in the 13th century. Has collapsed three times and been reconstructed each time. Survived heavy bombing by the Germans during WW2. 3D models are used to track changes, foresee structural problems, and offer virtual tours.
Overview of the algorithm. The approach consists of multiple stages of processing: 1. For every point, compute the likelihood that it lies in a planar neighborhood (the point's planarity). 2. Sort the points by planarity and grow planar regions around the points with the highest planarity values. 3. Combine adjacent regions that represent the same planar surface (their borders are comprised mostly of points with poor planarity measures). 4. Prune regions whose points do not represent a planar section.
Covariance: Cov(X, Y) = (1 / (n - 1)) * Sum(i = 1..n) (Xi - Xmean) * (Yi - Ymean)
Covariance matrix:
C = | Cov(X, X)  Cov(X, Y)  Cov(X, Z) |
    | Cov(Y, X)  Cov(Y, Y)  Cov(Y, Z) |
    | Cov(Z, X)  Cov(Z, Y)  Cov(Z, Z) |
Eigenvalues: det(C - lambda * I) = 0. Eigenvectors: C * v = lambda * v.
Equation of a plane: ax + by + cz = d.
Identify v3, the eigenvector of the smallest eigenvalue, with (a, b, c); compute d = a * Xmean + b * Ymean + c * Zmean using a point known to lie on the plane (the centroid).
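The plane-fitting step above can be sketched in a few lines of Python. This is a hypothetical illustration, not the paper's implementation: the covariance matrix is built directly from the definition, and the eigenvector of the smallest eigenvalue is obtained by power iteration on trace(C)*I - C (valid because C is positive semi-definite); a real implementation would call a library eigensolver such as GSL's gsl_eigen_symmv or numpy.linalg.eigh.

```python
def covariance_matrix(points):
    # 3x3 sample covariance matrix of a list of (x, y, z) tuples
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    C = [[0.0] * 3 for _ in range(3)]
    for x, y, z in points:
        d = (x - cx, y - cy, z - cz)
        for i in range(3):
            for j in range(3):
                C[i][j] += d[i] * d[j] / (n - 1)
    return (cx, cy, cz), C

def smallest_eigenvector(C, iters=500):
    # Power iteration on M = c*I - C: since C is positive semi-definite and
    # c = trace(C) bounds its largest eigenvalue, the dominant eigenvector of
    # M is the eigenvector of C's smallest eigenvalue (the plane normal).
    c = C[0][0] + C[1][1] + C[2][2]
    M = [[(c if i == j else 0.0) - C[i][j] for j in range(3)] for i in range(3)]
    v = [1.0, 1.0, 1.0]
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

def best_fitting_plane(points):
    # Returns ((a, b, c), d) for the plane ax + by + cz = d
    centroid, C = covariance_matrix(points)
    a, b, c = smallest_eigenvector(C)
    d = a * centroid[0] + b * centroid[1] + c * centroid[2]
    return (a, b, c), d
```

The returned normal is only defined up to sign, which is harmless here since a plane and its flipped normal describe the same surface.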
Planarity: a scalar quantity that gives the likelihood that a point is part of a planar region. Define the ε-neighborhood of a point. Define the best fitting plane F to a set of points; it is computed with PCA (Principal Component Analysis). Approach 1 measures the quality of fit of a plane to a set of points. This is not the best option, since scanners tend to sample foliage more sparsely, so a point's ε-neighborhood can fit a plane well purely by chance.
A plane is uniquely defined by a point on the plane and a normal. Two passes: 1. For each point, compute the best fitting plane of its neighborhood and associate the resulting normal with that point. 2. Compute the planarity measure of a point as the variance of the normals of the points within its neighborhood. The unit of measure is degrees squared. Define the (ε, k)-neighborhood.
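One plausible reading of the second pass, sketched below under assumptions of my own (per-point unit normals are already computed and consistently oriented, and "variance of the normals" is taken as the variance of each neighbor normal's angular deviation from the mean direction, in degrees squared):

```python
import math

def angle_deg(u, v):
    # Angle between two unit vectors in degrees; the abs() ignores
    # orientation, since n and -n describe the same plane.
    dot = abs(sum(a * b for a, b in zip(u, v)))
    return math.degrees(math.acos(min(1.0, dot)))

def planarity(point, points, normals, eps):
    # Collect the normals of all points in the eps-neighborhood of `point`
    nbrs = [n for p, n in zip(points, normals) if math.dist(p, point) <= eps]
    # Mean normal direction (renormalized)
    m = [sum(n[i] for n in nbrs) / len(nbrs) for i in range(3)]
    norm = math.sqrt(sum(x * x for x in m))
    m = [x / norm for x in m]
    # Variance of the angular deviations, in degrees squared
    angles = [angle_deg(n, m) for n in nbrs]
    mean_a = sum(angles) / len(angles)
    return sum((a - mean_a) ** 2 for a in angles) / len(angles)
```

A perfectly flat neighborhood (all normals identical) scores 0; scattered normals, as in foliage, score high, which is exactly what makes this measure more robust than a raw fit residual.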
Iterative region growing algorithm for planar region detection. Each step consists of finding points to add to the region and updating the region's planar approximation. To determine whether a point is added to the region, we check two values:
- the smallest distance between it and any point already in the region (threshold Trd), which enforces a point density
- its perpendicular distance to the current planar approximation (threshold Tpd), which limits the amount of perpendicular variation
Once we obtain the list of points to be added, we fit a planar approximation to the new set of points and remove all points in the region whose perpendicular distances are now above Tpd. To prevent the planar approximation from changing too drastically in early iterations, we update the region normal with a weighted combination of the new and previous normals of the plane: N = 1/4 * Nold + 3/4 * Nnew. The process can be accelerated by using k-d trees. More information in the seminar paper or in the original.
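A minimal sketch of one growth iteration under the two thresholds, with hypothetical helper names and brute-force neighbor search in place of the k-d tree acceleration mentioned above:

```python
import math

def perpendicular_distance(p, normal, d):
    # Distance from point p to the plane ax + by + cz = d, (a, b, c) a unit normal
    return abs(sum(a * b for a, b in zip(normal, p)) - d)

def grow_step(region, candidates, normal, d, Trd, Tpd):
    # A candidate joins the region if it is within Trd of some region point
    # (density constraint) and within Tpd of the planar approximation.
    added = []
    for p in candidates:
        near = any(math.dist(p, q) <= Trd for q in region)
        if near and perpendicular_distance(p, normal, d) <= Tpd:
            added.append(p)
    return region + added

def blend_normal(n_old, n_new):
    # Damped normal update N = 1/4 * Nold + 3/4 * Nnew, renormalized
    n = [0.25 * a + 0.75 * b for a, b in zip(n_old, n_new)]
    s = math.sqrt(sum(x * x for x in n))
    return [x / s for x in n]
```

After each grow_step, the full algorithm would refit the plane to the enlarged region, blend the normals, and evict points whose perpendicular distance now exceeds Tpd.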
Tnra was set to 15 degrees in the experiments. Two regions are considered close if the distance between them is below a threshold Tnrd (set to 2 * Trd).
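The combine step can be sketched as connected components over valid region pairs. In this illustrative version (my own simplification), each region is reduced to its normal, the inter-region distance is supplied by the caller as a function, and the components are merged with a small union-find:

```python
import math

def find(parent, i):
    # Union-find root lookup with path halving
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def combine_regions(normals, dist, Tnra_deg, Tnrd):
    # An edge connects regions i, j when their normals differ by less than
    # Tnra_deg degrees and their separation dist(i, j) is below Tnrd;
    # returns a component label per region.
    n = len(normals)
    parent = list(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            dot = abs(sum(a * b for a, b in zip(normals[i], normals[j])))
            angle = math.degrees(math.acos(min(1.0, dot)))
            if angle < Tnra_deg and dist(i, j) < Tnrd:
                parent[find(parent, i)] = find(parent, j)
    return [find(parent, i) for i in range(n)]
```

Regions sharing a label are merged into one planar region and refitted.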
Moving vehicle equipped with three SICK LIDAR scanners (75 Hz rotation, 180-degree field of view, half-degree resolution), giving a maximum of about 27,000 points/second. The data from all three scanners is run through a fusion process, where it is combined with readings from spatial sensors on the vehicle. The result is a format that contains a tuple for each scan point: latitude, longitude and elevation. The fused data is the input to the algorithm.
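The quoted rate follows directly from the scanner parameters; a quick sanity check (counting one reading per half degree across the 180-degree sweep, endpoints included):

```python
rotation_hz = 75                          # sweeps per second
samples_per_sweep = int(180 / 0.5) + 1    # 361 range readings per 180-degree sweep
points_per_second = rotation_hz * samples_per_sweep
print(points_per_second)                  # roughly the quoted 27,000 points/second
```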
Talk about why splitting the data into subsets works, and about improvements using a sliding-window technique. Compute real-time possibilities based on the execution time. Computer used: 24 cores (6 x 2.66 GHz Xeon quad-core), 88 GB RAM. Velodyne scanner, etc.
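A back-of-the-envelope version of that real-time argument, using only the numbers quoted on the results and acquisition slides (95 million points in 21.4 minutes, versus a quoted maximum acquisition rate of about 27,000 points/second):

```python
points = 95_000_000
runtime_s = 21.4 * 60                     # 21.4 minutes in seconds
processing_rate = points / runtime_s      # points processed per second
acquisition_rate = 27_000                  # quoted maximum points acquired per second
print(round(processing_rate))              # comfortably above the acquisition rate
```

On the quoted hardware the pipeline processes points faster than a scanner produces them, which is what makes the real-time discussion plausible.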