Imprint on the future
Humanoid Robot for Flexible Picking
A robot with one pair of eyes, two arms and two hands that effortlessly assembles a wheel at the same speed and with the same accuracy as a human. This is the product with which Robomotive is currently scoring very well while making its imprint on the future. It is a success story that system integrator Beltech and vision component supplier Phaer were only too happy to co-write.
The robot of the future that can see
Michael Vermeer has already been bringing traditional, industrial robots to the market for many years. The General Manager of Robomotive says: "Until now, robots always carried out the same repetitive work, which is why we feel it is time to move one step forward and let robots handle a large diversity of products in small numbers. For this task, robots need eyes so that they can see what they are doing. This sight is made possible with 3D vision, an application that is becoming increasingly important in our sector, contributing to a more positive image of robots. No longer are they dirty, dumb and dangerous. They are now flexible, adaptive and intelligent. We involved system integrator Beltech for the vision hardware and software."
Product-specific grippers, jigs and feeders are expensive. To provide a good return on investment, the robot needed adaptive grippers so that it could cope with changes in the parts being handled. With the Robomotive solution, programs can easily be loaded depending on the product and the task to be performed.
A robust vision module
Beltech specialises in machine vision for industrial applications. Léon Bemelmans, Technical Director at Beltech, elaborates: "Why did we call upon Phaer for the Robomotive vision module? This was actually quite a coincidence. Business Manager Koenraad Van de Veere just walked into our office one day with unique vision components. Of course we knew that Phaer, just like our company, has a long history in the world of vision. This experience created trust, which is why we wanted to test his components. We started working with the Photonfocus 3D-01 camera, the Aqsense peak detection and calibration tool, the Z Laser M18 and the Bitflow interface PCIe board.
The result was a unique vision module that performed surprisingly well and which we confidently presented to Robomotive."
A perfect 3D reconstruction for precise guidance
To detect the object's position and orientation, a laser triangulation solution was used. "At the moment, laser triangulation is the best option for robot guidance. It is a very robust, reliable technique that won't be disturbed by variations in lighting or by reflections," states Vermeer. "We are experimenting with other 3D vision technologies, but they aren't as robust as triangulation." The Peak Detector built into the Photonfocus camera was used to detect the point of maximum laser intensity. "With this Peak Detection, up to 10x better detection can be achieved compared to a typical Center of Gravity approach, yielding sharper surface details," notes Carles Matabosch, Aqsense's Technical Director. In addition, a filter on the camera blocked all light outside the Z-Laser wavelength.
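To illustrate the principle behind laser-line peak detection, the sketch below finds, for each image column, the row where the laser stripe is brightest and then refines it to sub-pixel precision with a local center-of-gravity fit. This is a minimal generic sketch, not the Photonfocus Peak Detector itself; the function name, window size and synthetic image are illustrative assumptions.

```python
import numpy as np

def detect_laser_peaks(frame: np.ndarray, window: int = 3) -> np.ndarray:
    """Per-column sub-pixel laser-line detection (illustrative sketch).

    For each image column, take the row of maximum intensity, then
    refine it with a center-of-gravity fit over a small window around
    that peak. Returns one sub-pixel row coordinate per column
    (NaN where no laser light was seen).
    """
    rows, cols = frame.shape
    peaks = np.full(cols, np.nan)
    for c in range(cols):
        col = frame[:, c].astype(float)
        r = int(np.argmax(col))
        if col[r] <= 0:
            continue  # no laser signal in this column
        lo, hi = max(0, r - window), min(rows, r + window + 1)
        w = col[lo:hi]
        peaks[c] = np.sum(np.arange(lo, hi) * w) / np.sum(w)
    return peaks

# Synthetic example: a laser stripe whose intensity straddles rows 40 and 41,
# so the true sub-pixel peak lies at row 40.7
img = np.zeros((100, 5))
img[40, :] = 0.3
img[41, :] = 0.7
print(detect_laser_peaks(img))
```

A hardware peak detector performs this per-column search inside the camera, so only one row value per column leaves the sensor instead of a full image, which is what makes high profile rates possible.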
The task of the triangulation system was to generate a 3D point cloud from which 3D coordinates were obtained and transferred to the robot. The software written by Beltech had to calculate a path and a landing zone for the grippers with no objects blocking the movement. Moreover, the point clouds had to be free of perspective distortion so that precise metric 3D coordinates could be used; for this purpose, Beltech relied on Aqsense's Metric Calibration System.
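The step from per-profile laser peaks to a metric point cloud can be sketched as follows. The simple linear row-to-height and column-to-width scales below stand in for a real metric calibration such as Aqsense's (which applies a calibrated, generally non-linear mapping to remove perspective distortion); the function name and scale parameters are illustrative assumptions.

```python
import numpy as np

def profiles_to_point_cloud(peak_rows, y_positions,
                            mm_per_col=0.1, mm_per_row=0.05):
    """Accumulate per-profile laser peaks into a 3D point cloud (sketch).

    peak_rows   : (n_profiles, n_cols) sub-pixel laser rows, NaN = no data
    y_positions : (n_profiles,) scan positions along the transport axis, mm
    The linear scales here are placeholder calibration values.
    Returns an (N, 3) array of XYZ points in millimetres.
    """
    pts = []
    n_profiles, n_cols = peak_rows.shape
    xs = np.arange(n_cols) * mm_per_col          # lateral position per column
    for i in range(n_profiles):
        valid = ~np.isnan(peak_rows[i])          # drop columns with no laser
        z = peak_rows[i, valid] * mm_per_row     # row -> height
        x = xs[valid]
        y = np.full(x.shape, y_positions[i])     # scan axis position
        pts.append(np.column_stack([x, y, z]))
    return np.vstack(pts)

# Two scanned profiles, one with a missing column
rows = np.array([[10.0, np.nan],
                 [20.0, 20.0]])
cloud = profiles_to_point_cloud(rows, np.array([0.0, 1.0]))
print(cloud.shape)  # three valid points, each with X, Y, Z
```

Each profile contributes one "slice" of the object; stacking slices along the scan axis is what turns a 2D laser line into a full 3D reconstruction.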
Based on the metric 3D coordinates and on the object's orientation plane and angle, the resulting system computed the grippers' landing zone, defined either as a cylinder with the gripper's diameter or as a full 3D model of the gripper, while avoiding collisions between the robot and its surroundings.
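The cylinder-based collision test described above can be sketched like this: model the gripper's approach volume as a cylinder along its approach axis and verify that no scene point intrudes into it. This is a minimal sketch of the idea, not Beltech's actual software; the function name and tolerance are illustrative assumptions.

```python
import numpy as np

def gripper_clearance_ok(cloud, grasp_point, axis, radius, height):
    """Check a cylindrical gripper approach volume for obstructions (sketch).

    cloud       : (N, 3) metric point cloud of the scene
    grasp_point : (3,) target point on the object
    axis        : (3,) approach direction of the gripper
    radius      : cylinder radius (gripper diameter / 2)
    height      : length of the approach volume above the grasp point
    Returns True when no scene point falls inside the cylinder.
    """
    axis = np.asarray(axis, float)
    axis /= np.linalg.norm(axis)
    rel = cloud - np.asarray(grasp_point, float)
    t = rel @ axis                                # height along approach axis
    radial = np.linalg.norm(rel - np.outer(t, axis), axis=1)
    inside = (t > 1e-6) & (t < height) & (radial < radius)
    return not np.any(inside)

# A point far to the side does not block a vertical approach...
print(gripper_clearance_ok(np.array([[10.0, 0.0, 0.0]]),
                           np.zeros(3), [0, 0, 1], radius=5, height=50))
# ...but a point directly above the grasp point does.
print(gripper_clearance_ok(np.array([[0.0, 0.0, 10.0]]),
                           np.zeros(3), [0, 0, 1], radius=5, height=50))
```

In practice such a test would be run for several candidate grasp points and orientations derived from the object's fitted plane, and the first collision-free candidate would be sent to the robot.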