Henrik Christensen - Vision for co-robot applications

Presentation at the 2015 NRI PI meeting.

  1. 1. Vision for Co-Robot Applications • Henrik I. Christensen
 KUKA Chair of Robotics
 Robotics @ Georgia Tech
 Atlanta, Georgia
 Henrik.Christensen@gatech.edu
  2. 2. Co-X Robotics
 • Co-Worker: Next Generation Manufacturing
 • Co-Inhabitants: Assistance to People in Daily Lives
 • Co-Protectors: Support for core industries
  3. 3. Co-Worker Roadmap
  4. 4. Perception Tasks • Pick-and-place tasks • Robots are moving from controlled settings to unstructured environments • Robust object perception is crucial
  5. 5. Motivations • Depth sensors are everywhere! • 40 million Kinects sold • Google Project Tango • Apple + PrimeSense • Occipital, Inc.
  6. 6. Motivations • 3D object models have accumulated on the Internet! • Requiring a known 3D object model used to be a strong assumption
  7. 7. Motivations • Google 3D Warehouse (about 2.5 million models)
  8. 8. The Problem • Input: 3D object models and camera images • Output: object ID & pose, via pose estimation & tracking
  9. 9. Challenges 1. Objects with and without Textures 2. Background Clutter 3. Object Discontinuities 4. Real-time Constraints
  10. 10. Challenge 1: Texture
 • Textured objects: photometric features (color, keypoints, edges, or textures from surfaces)
 • Textureless objects: geometric features (point coordinates, surface normals, depth discontinuities)
 • To handle both textured and textureless objects, employ both photometric and geometric features (see the sketch below)
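As a rough illustration of the two feature families, here is a minimal OpenCV/NumPy sketch (not the system from the talk): ORB keypoints and Canny edges stand in for photometric cues, and depth-gradient normals stand in for geometric cues.

```python
import cv2
import numpy as np

def photometric_features(rgb):
    """Appearance-based cues -- informative for textured objects."""
    gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
    keypoints, descriptors = cv2.ORB_create().detectAndCompute(gray, None)
    edges = cv2.Canny(gray, 50, 150)  # photometric edge map
    return keypoints, descriptors, edges

def geometric_features(depth):
    """Surface normals from a float32 depth map -- informative for textureless objects."""
    # Depth gradients approximate the local surface orientation.
    dzdx = cv2.Sobel(depth, cv2.CV_32F, 1, 0, ksize=5)
    dzdy = cv2.Sobel(depth, cv2.CV_32F, 0, 1, ksize=5)
    normals = np.dstack((-dzdx, -dzdy, np.ones_like(depth)))
    return normals / np.linalg.norm(normals, axis=2, keepdims=True)
```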
  11. 11. Challenge 2: Clutter
 • False measurements lead to false pose estimates and getting stuck in local minima
 • No table-top assumption
 • Controlled vs. unstructured environments: difficulty grows with the degree of clutter
 • Remedy: multiple-pose-hypothesis frameworks, i.e. particle filtering for pose tracking and a voting process for pose estimation
  12. 12. Challenge 3: Discontinuities
 • Ideal vs. reality: the object is occluded by other objects, humans, or robots; goes out of the camera's field of view; or is blurred in images
 • This creates a re-initialization problem (occlusions, out of FOV, blur)
 • Remedy: a re-initialization scheme combining pose estimation and tracking
  13. 13. Challenge 4: Real-time
 • Constrained by timing limitations
 • Real-time performance is rare among state-of-the-art methods
 • Remedy: exploiting the power of parallel computation on the GPU
  14. 14. Approaches
 • 2D Visual Information (Monocular Camera; photometric)
 – Combining Keypoint and Edge Features
 – Handling Textureless Objects
 • 3D Visual Information (RGB-D Camera; geometric)
 – Voting-based Pose Estimation using Pair Features (see the sketch below)
 – Object Pose Tracking
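For reference, here is a minimal sketch of the standard point-pair feature behind voting-based pose estimation (after Drost et al.); the function names and quantization steps are placeholder assumptions, not the talk's implementation.

```python
import numpy as np

def pair_feature(p1, n1, p2, n2):
    """4D feature F = (||d||, angle(n1, d), angle(n2, d), angle(n1, n2))
    for two oriented points (p, n) with unit normals."""
    d = p2 - p1
    dist = np.linalg.norm(d)
    d_hat = d / dist
    ang = lambda a, b: np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
    return np.array([dist, ang(n1, d_hat), ang(n2, d_hat), ang(n1, n2)])

def quantize(feature, d_step=0.01, a_step=np.deg2rad(12)):
    """Hash key: offline, model pairs are stored under their quantized
    feature; online, scene pairs vote for poses found under the same key."""
    steps = np.array([d_step, a_step, a_step, a_step])
    return tuple((feature // steps).astype(int))
```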
  15. 15. Overview [Figure: overall system flow with a monocular camera. The Global Pose Estimation (GPE) estimates the initial object pose by SURF keypoint matching against keyframes; from that initial estimate, the Local Pose Estimation (LPE) consecutively estimates the pose with RAPiD-style tracking using the CAD model (model rendering, edge detection, error calculation, pose update with IRLS). The keyframes and CAD model are generated offline.]
  16. 16. [Figure: tracking comparison of our approach vs. keypoint-only vs. edge-only, with model rendering]
  17. 17. Particle Filter • Represents the posterior p.d.f. as a set of weighted particles • Handles non-linear, non-Gaussian, multi-modal distributions • Widely adopted in robotics, computer vision, etc. (see the sketch below)
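A minimal generic particle filter step (an illustrative sketch with a plain state vector, not the SE(3) pose tracker from the talk; `propagate` and `likelihood` are assumed user-supplied models):

```python
import numpy as np

def particle_filter_step(particles, weights, propagate, likelihood, z):
    """One predict-update-resample cycle over N weighted particles."""
    particles = propagate(particles)                  # predict: motion model
    weights = weights * np.array([likelihood(p, z) for p in particles])
    weights = weights / weights.sum()                 # update: importance weights
    # Systematic resampling concentrates particles on likely states.
    n = len(particles)
    positions = (np.arange(n) + np.random.rand()) / n
    idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), n - 1)
    return particles[idx], np.full(n, 1.0 / n)
```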
  18. 18. AR Dynamics • Instead of Gaussian random walk models • Linear prediction based on previous states • Propagates particles more effectively
 In the state dynamics, the trivial choice A(X, t) = 0 gives a random walk model; a first-order autoregressive (AR) process is a flexible yet simple alternative:
 X_t = X_{t-1} · exp(A_{t-1} + dW_t √Δt),   A_{t-1} = a · log(X_{t-2}^{-1} X_{t-1})
 where a is the AR process parameter. Originally formulated on Aff(2), the AR process model also holds on the connected Lie group SE(3). Most particle filter-based trackers assume initial states are given; in practice, initial particles are crucial for convergence to the true state, so here they are initialized from keypoint correspondences rather than from scratch. (A propagation sketch follows below.)
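A sketch of one AR propagation step on SE(3), with poses as 4x4 homogeneous matrices (the parameter values and noise scale are placeholder assumptions):

```python
import numpy as np
from scipy.linalg import expm, logm

def ar_propagate(X_prev2, X_prev1, a=0.5, noise_scale=0.01, dt=1.0):
    """X_t = X_{t-1} * exp(A_{t-1} + dW_t * sqrt(dt)) with
    A_{t-1} = a * log(X_{t-2}^{-1} X_{t-1})."""
    A = a * np.real(logm(np.linalg.inv(X_prev2) @ X_prev1))  # velocity term
    # Wiener noise in the Lie algebra se(3): 3 rotational + 3 translational dofs.
    w = noise_scale * np.random.randn(6)
    dW = np.array([[0.0,  -w[2],  w[1], w[3]],
                   [w[2],  0.0,  -w[0], w[4]],
                   [-w[1], w[0],  0.0,  w[5]],
                   [0.0,   0.0,   0.0,  0.0]])
    return X_prev1 @ expm(A + dW * np.sqrt(dt))
```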
  19. 19. Re-initialization • Effective number of particles
 When tracking fails, the tracker must re-initialize. In sequential Monte Carlo methods the effective particle size N_eff is a standard measure of degeneracy; since it is hard to evaluate exactly, the estimate
 N̂_eff = 1 / Σ_{i=1}^{N} (w̃^(i))²
 is used. N̂_eff is usually the trigger for resampling, but since this tracker resamples every frame, it serves instead as the re-initialization test: when the number of effective particles falls below a fixed threshold N_thres, the re-initialization procedure is performed. (A check sketch follows below.)
 [Plot: N_eff vs. frame number, with and without re-initialization]
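The corresponding check is a one-liner (the threshold value is an assumption for illustration):

```python
import numpy as np

def needs_reinitialization(weights, n_thres=25.0):
    """True when the effective particle size drops below the threshold."""
    w = weights / weights.sum()       # normalized importance weights
    n_eff = 1.0 / np.sum(w ** 2)      # estimated N_eff = 1 / sum_i (w_i)^2
    return n_eff < n_thres
```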
  20. 20. Experiments (2D monocular: Combining Keypoint and Edge Features [IJRR'12]) • Single vs. multiple pose hypotheses • With vs. without AR state dynamics • Re-initialization experiment
  21. 21. Robotic Assembly 1/2
  22. 22. Robotic Assembly
  23. 23. Robotic Assembly x 2
  24. 24. Robotic Assembly 2/2
  26. 26. Real Sequence [video: ours vs. PCL tracking]
  27. 27. 3D models on the Web
  28. 28. Real-time
  30. 30. Robotic Assembly
  31. 31. Credits • Students / Postdocs: Changhyun Choi (now MIT), Heni Ben Amor (now ASU), Samarth Brahmbhatt (GT), Ana Huaman (GT) • Sponsors: Boeing, GM, PSA, ARL & NSF
  32. 32. Summary • The next revolution in robotics will be driven by software, not the traditional hardware • Vision for closing the control loop • Cloud-based services for sharing knowledge • Systems that collaborate with people
