Maintenance and training on industrial equipment are important uses for Augmented Reality; however, the objects themselves and their real-world environments introduce significant challenges for reliable object recognition and tracking. Some developers and solutions address these by requiring the attachment of a marker (fiducial).
In this talk we'll explain the obstacles to reliably and robustly recognizing and tracking complex industrial equipment and engine parts without unique markers, and show how to overcome them with a combination of computer vision and depth-sensing technologies. We will share the results of our hands-on experience with several commercial depth-sensing cameras (including the Intel® RealSense and the Occipital® Structure Sensor) in addressing these obstacles.
Augmented World Expo (AWE) is back for its seventh year as our largest conference and expo featuring technologies that give us superpowers: augmented reality (AR), virtual reality (VR), and wearable tech. Join over 4,000 attendees from all over the world, including CEOs, CTOs, designers, developers, creative agencies, futurists, analysts, investors, and top press, in a fantastic opportunity to learn, inspire, partner, and experience firsthand the most exciting industry of our times. See more at http://AugmentedWorldExpo.com
David Marimon (Catchoom) Getting Rid of the Marker: Object Recognition and Tracking
1. #AWE2016
Getting rid of the marker:
object recognition and tracking
of industrial equipment
June 1st, 2016
Augmented World Expo US
#AWE2016
David Marimon
CEO & Co-founder
david@catchoom.com
+34 654 906 753
2. #AWE2016
Current Limitations of Training and Maintenance using AR
Markerless 3D Object Recognition of Industrial Equipment
Working with Depth Sensing Cameras
Content
6. #AWE2016
Catchoom has developed computer vision software to tackle
both limitations:
• Markerless: the object itself is recognized.
• Based only on images and depth: no CAD model needed.
Markerless 3D Object Recognition
7. #AWE2016
Current solution:
• Textureless: no need for plenty of corners, edges, or
any well-defined pattern on the object.
• On-Device: everything can run inside the portable device of
the operator.
Markerless 3D Object Recognition
10. #AWE2016
Cameras:
• Microsoft Kinect
• Intel RealSense
• Occipital Structure Sensor
Lessons learned:
• The cameras we explored provide similar performance. The key to
success is the software that processes the raw data.
• Lighting conditions and materials can change results
significantly.
Working with Depth Cameras
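To make the first lesson concrete, here is a minimal, hypothetical sketch of the kind of raw-data processing the slide alludes to (not Catchoom's actual pipeline): depth sensors commonly report 0 for pixels where no return was measured, and a simple hole-filling filter can recover many of those pixels from their valid neighbours.

```python
def clean_depth(frame, window=1):
    """Replace invalid (zero) depth readings with the median of the
    valid neighbours within a (2*window+1)^2 patch.  Pixels with no
    valid neighbour stay at 0.  `frame` is a list of rows of depth
    values (e.g. millimetres); valid pixels are left untouched."""
    h, w = len(frame), len(frame[0])
    out = [row[:] for row in frame]
    for y in range(h):
        for x in range(w):
            if frame[y][x] != 0:
                continue  # already a valid measurement
            # Collect valid readings from the surrounding patch.
            neigh = [frame[j][i]
                     for j in range(max(0, y - window), min(h, y + window + 1))
                     for i in range(max(0, x - window), min(w, x + window + 1))
                     if frame[j][i] != 0]
            if neigh:
                neigh.sort()
                out[y][x] = neigh[len(neigh) // 2]  # median neighbour
    return out
```

A real pipeline would add temporal filtering across frames and work on full-resolution sensor buffers; this only illustrates why the software layer, rather than the camera choice, tends to dominate the result.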
12. #AWE2016
① The future of training and maintenance is augmented.
② Catchoom has developed markerless 3D object recognition
for industrial environments.
③ Depth sensing still has some challenges with light and
materials, but software can help a lot.
Takeaways
13. #AWE2016
June 1st, 2016
Augmented World Expo US
#AWE2016
David Marimon
CEO & Co-founder
david@catchoom.com
+34 654 906 753
Come visit our booth for a live demo!
Editor's notes
Throughout the day, you'll be seeing solutions like this one that can serve multiple purposes: from visualizing the state of a certain machine, to learning how to manipulate it on-site or with remote assistance. The operator is provided with guidance and sometimes even live data coming from connected devices.
Let’s look at commercial approaches and how this is solved today.
There are several solutions on the market that provide excellent tools for field operators. Among them are NGRAIN's and iQAgent's solutions.
Systems like the ones shown in the pictures make use of fiducial markers or QR codes in order to identify and augment the equipment.
As you can see in those pictures, this requires an extra step of placing the markers. In some cases this may be a showstopper, depending on the application, the environment, or even customer requirements.
Solutions that do not rely on markers often make use of 3D CAD models in order to recognize and track the object.
However, some manufacturers do not want to provide such CAD models to an integrator. This was a bit shocking the first time we encountered it, while working for a car manufacturer, but it makes perfect sense if you think about it for a second.
The reason is that industrial designs, and IP in general, are too sensitive to hand over for a third-party integration, even if that integration is meant to help operators use their own equipment or parts.
------
Thomas Perpère from Diota recently said in a webinar hosted by the AREA that AR becomes most interesting when humans meet complexity. In my vision of what AR can bring to industrial applications, that means: let's make it less complex.
Catchoom is developing computer vision software that simplifies the setup and makes the bridge between the equipment and the digital content associated with it seamless.
We use images and depth as input for the setup, no other source of information.
A depth camera is a device that provides, for every pixel, the distance from the camera to whatever object is depicted.
They work in different ways, but a very common one is to project a pattern with an infrared light emitter and then capture the light that comes back from the real world into the camera. From the deformation of the pattern, or from the time it takes for the light to go out into the world and be reflected back into the sensor, it is possible to estimate the depth.
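The two estimation mechanisms just described can be sketched with toy numbers (the focal length, baseline, and timing below are illustrative, not taken from any specific camera):

```python
C = 299_792_458.0  # speed of light in m/s

def depth_structured_light(focal_px, baseline_m, disparity_px):
    """Triangulation: the projected pattern appears shifted by
    `disparity_px` pixels between the emitter and the sensor,
    separated by `baseline_m`.  Depth is Z = f * b / d."""
    return focal_px * baseline_m / disparity_px

def depth_time_of_flight(round_trip_s):
    """Time of flight: the emitted pulse travels to the object and
    back, so depth is Z = c * t / 2."""
    return C * round_trip_s / 2.0
```

For example, a 15-pixel pattern shift seen through a 600-pixel focal length with a 7.5 cm baseline corresponds to 3 m of depth, and a round trip of 20 ns corresponds to roughly the same distance.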
There are several commercial depth cameras on the market. They may look like this one from Occipital, or, if you use the Microsoft Kinect, you have one at home already.
Regarding light: it basically interferes with the waveforms emitted and received by the camera. For instance, outdoors there is typically too much external light coming from the sun, and the emitter is not strong enough to overcome it.
Our suggestion is to consider depth sensors for indoor use only, at least until computer vision based on RGB cameras compensates for that… and of course we're also working on that.
As for materials, reflective or even transparent ones, like glass, can cause some trouble for the sensors.
Speaking of materials, have you ever wondered why there is so much AR in Iron Man? Well, it's because everything is red!
However, some other superheroes don't think the same.
Black absorbs all the light and poses serious trouble for depth sensors.
So, if you’re producing parts, please stay with grey (or pink would be even better).
So there's still work to be done with those cameras. From our experience, software makes the big difference in overcoming the challenges of light and materials.
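As a hedged illustration of that point (a sketch under assumed conventions, not the talk's actual method): since dark or transparent materials and strong ambient light all show up as invalid (zero) depth readings, software can at least measure how much of a frame the sensor actually resolved before trusting it.

```python
def valid_fraction(frame):
    """Fraction of pixels in a depth frame with a usable reading.
    `frame` is a list of rows of depth values; 0 means no return
    was measured (e.g. black or glass surfaces, sunlight)."""
    total = sum(len(row) for row in frame)
    valid = sum(1 for row in frame for d in row if d > 0)
    return valid / total if total else 0.0

def frame_is_usable(frame, threshold=0.6):
    """Decide whether to run recognition on this frame.
    The 0.6 threshold is an assumption; tune it per sensor
    and environment."""
    return valid_fraction(frame) >= threshold
```

A pipeline could fall back to RGB-only processing, or simply skip recognition, on frames that fail this check.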