Robot 3D vision guidance technology

2022-10-01

Robot 3D vision guidance technology, advancing the "Made in China 2025" initiative

Composition of a robot 3D vision guidance solution

A robot 3D vision guidance solution consists mainly of four parts: a 3D image acquisition scheme, a 3D image processing scheme, a hand-eye calibration scheme, and a robot control scheme.

Because robot control technology differs from brand to brand, this article sets it aside for now and focuses on the 3D image acquisition, 3D image processing, and hand-eye calibration schemes.

3D image acquisition scheme

A 3D image acquisition scheme can be arranged in either eye-in-hand mode or eye-to-hand mode.

Eye-in-hand mode: the 3D camera is mounted on the manipulator, which moves the camera along a preset trajectory to scan the measured object. The object must lie within the camera's field of view (FOV) and measurement range (MR).

Eye-to-hand mode: the 3D camera is mounted on a gantry near the manipulator, and the gantry moves the camera to scan the measured object. The object must likewise lie within the camera's FOV and MR.

Beyond these mounting options, 3D acquisition schemes can also be classified by imaging principle: passive-light binocular, active-light binocular, laser triangulation, structured light, time-of-flight (ToF), and others.

Passive-light binocular stereo vision

Passive-light binocular vision consists of two area-scan or line-scan cameras and a light source. The two cameras image the same position, and the height of the object is computed from the disparity map. The two views can come from two cameras shooting the object from different angles, or from one camera shooting it from different angles at different times.
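The disparity-to-depth relation behind this can be sketched as follows. This is a minimal illustration with hypothetical camera parameters, not a complete stereo pipeline (rectification and matching are omitted).

```python
# Depth from disparity in a rectified, calibrated stereo pair:
# Z = f * B / d, where f is the focal length in pixels, B the baseline
# between the two cameras, and d the disparity of a matched pixel pair.
def depth_from_disparity(focal_px: float, baseline_mm: float, disparity_px: float) -> float:
    """Return depth in mm for one matched pixel pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a valid match")
    return focal_px * baseline_mm / disparity_px

# Illustrative numbers: 1200 px focal length, 80 mm baseline, 12 px disparity.
print(depth_from_disparity(1200.0, 80.0, 12.0))  # -> 8000.0 (mm)
```

The formula also shows why matching errors matter so much: a small disparity error on a distant (small-d) point produces a large depth error, which is the amplification effect described above.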

The advantage of passive-light binocular vision is that the camera need not move relative to the object, and it can cover a wide field of view. The disadvantage is that when the surface of the measured object has poor contrast, the object cannot be matched and its 3D information cannot be recovered. Passive-light binocular vision also demands high recognition accuracy from each camera, since a single camera's error is amplified in the stereo result. These limitations make it relatively uncommon in industrial applications.

To widen the field of view and eliminate blind zones, passive-light binocular vision can also be extended to passive-light multi-view vision.

Active-light binocular stereo vision

To compensate for the shortcomings of passive-light binocular vision, engineers project textured light onto the measured object as an aid, for example a random dot pattern.

The principle of active-light binocular vision is the same as the passive case: depth is computed from the disparity between the two cameras. Because the active light source adds texture to the measured object, this acquisition scheme is considerably more versatile.

Principle of laser triangulation

A vision system based on this principle mainly comprises a 2D camera, a lens, a laser, and a calibration algorithm. The height of the measured object is obtained from the displacement of the laser line captured by the 2D camera, using trigonometric formulas.

The mounting arrangement is an important factor in laser triangulation. The prevailing approach on the market is to project a laser line directly onto the measured object and mount the 2D camera at a fixed angle to the laser (the measurement angle).

The 2D camera's resolution and the installed measurement angle both affect the z-direction resolution. The higher the camera resolution, the finer the z resolution, though outputting the extra data can limit scanning speed. The larger the measurement angle, the finer the z resolution, but the larger the occlusion blind zone. When building a 3D vision system, the camera and mounting method must therefore be chosen to suit the actual measured object.
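The angle trade-off can be illustrated with a simplified geometric model (the angle and shift values are hypothetical, and real systems calibrate this mapping rather than compute it from ideal geometry): if the camera views the laser sheet at measurement angle theta, a surface rise of h shifts the imaged line by roughly s = h * sin(theta), so a larger theta yields more image shift per millimeter of height, i.e. finer z steps, at the cost of a larger blind zone.

```python
import math

# Simplified laser-triangulation geometry (illustrative, not calibrated):
# a surface rise h shifts the imaged laser line by s = h * sin(theta),
# so height is recovered as h = s / sin(theta).
def height_from_shift(shift_mm: float, angle_deg: float) -> float:
    """Surface height from the observed laser-line shift at a given measurement angle."""
    return shift_mm / math.sin(math.radians(angle_deg))

# The same 1 mm image-plane shift means less height at a larger angle,
# i.e. each pixel of shift resolves a smaller height step:
print(height_from_shift(1.0, 30.0))  # -> 2.0 mm
print(round(height_from_shift(1.0, 60.0), 3))
```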

Besides the 2D camera and measurement angle, the beam quality of the laser is another major factor in measurement accuracy. Lasers with non-Gaussian beams and good uniformity, a product category developed in recent years, are important for improving measurement accuracy.

Vision systems based on laser triangulation have these characteristics: X and Z information is obtained simultaneously, while Y information comes from relative motion between the camera and the measured object. They suit short-range, small-field, high-speed, high-precision measurement.

Structured light principle

A 3D camera based on the structured light principle consists of a camera and a projector. The projector projects a sequence of fringe patterns that vary according to a code; after the camera captures the fringes, the object's 3D information is computed from how the code is deformed. To eliminate blind spots, structured-light 3D cameras are generally built with two cameras and one projector.
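One common coding scheme for such fringe sequences is Gray code, where each projected frame contributes one bit per pixel. As a sketch (the bit sequence below is hypothetical, and real decoders also handle phase shifting and thresholding), decoding a single pixel's projector column looks like this:

```python
# Structured-light sketch: decode one pixel's projector column index from a
# Gray-code fringe sequence. Each frame contributes one bit (bright=1, dark=0).
def gray_to_binary(bits):
    """Convert a Gray-code bit list (MSB first) to an integer column index."""
    value = bits[0]
    out = [value]
    for b in bits[1:]:
        value ^= b          # each binary bit is the XOR of all Gray bits so far
        out.append(value)
    return int("".join(map(str, out)), 2)

# A pixel observed bright, dark, bright, bright over 4 projected frames:
print(gray_to_binary([1, 0, 1, 1]))  # -> 13
```

Gray code is preferred over plain binary here because adjacent columns differ in only one bit, so a single misread frame displaces the decoded column by at most one stripe.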

Its characteristics: the camera and the measured object must remain relatively static; accuracy is high, but acquisition time is long.

ToF principle

A ToF camera obtains object height from the time of flight of light. It suits 3D image acquisition with a large field of view and long working distance where precision and cost requirements are low.
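The underlying relation is simply that light covers the camera-to-object distance twice, so distance = c * dt / 2. A minimal sketch with an illustrative round-trip time:

```python
# Time-of-flight sketch: light travels to the object and back, so
# distance = c * dt / 2 for a measured round-trip time dt.
C_MM_PER_NS = 299.792458  # speed of light in mm per nanosecond

def tof_distance_mm(round_trip_ns: float) -> float:
    """Object distance in mm from a round-trip time in nanoseconds."""
    return C_MM_PER_NS * round_trip_ns / 2.0

# A 10 ns round trip corresponds to roughly 1.5 m:
print(round(tof_distance_mm(10.0), 1))  # -> 1499.0
```

The same formula explains the low accuracy noted below: resolving 1 mm of depth requires timing the return to within about 6.7 picoseconds, which is why ToF cameras trade precision for range and speed.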

Its characteristics: fast detection, a large field of view, a long working distance, and a low price, but low accuracy and susceptibility to ambient light interference, so it is generally used indoors.

3D image processing scheme

At present, robot 3D vision is widely used in automatic welding, cutting, assembly, grasping, palletizing, and so on. These applications generally require image processing to recognize the pose of an object or the 3D coordinates of its edges. 3D image processing therefore has two problems to solve: object recognition and edge contour extraction.
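As a toy illustration of the pose-recognition problem (the point set is made up, and production systems use model matching on full point clouds rather than this pure-Python sketch), a planar part's position and orientation can be estimated from its contour points via the centroid and the principal axis of the point covariance:

```python
import math

# Pose-recognition sketch: estimate a planar part's position (centroid) and
# orientation (principal-axis angle) from its (x, y) contour points.
def planar_pose(points):
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    sxx = sum((p[0] - cx) ** 2 for p in points) / n
    syy = sum((p[1] - cy) ** 2 for p in points) / n
    sxy = sum((p[0] - cx) * (p[1] - cy) for p in points) / n
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)  # angle of the principal axis
    return cx, cy, math.degrees(theta)

# A bar-shaped point set tilted 45 degrees:
pts = [(0, 0), (1, 1), (2, 2), (3, 3)]
print(planar_pose(pts))  # -> (1.5, 1.5, 45.0)
```

The centroid gives the grasp position and the principal-axis angle the approach orientation; this is the minimal pose information a guidance application needs per part.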

Hand-eye calibration scheme

Hand-eye calibration solves the transformation between image coordinates and robot coordinates. The calibration procedure differs slightly between eye-in-hand and eye-to-hand modes, but in either case the coordinates of the measured object relative to the camera are ultimately converted into coordinates relative to the tool coordinate system.


The affine relationship between the object coordinate system and the camera coordinate system is obtained from the image. Then, using the hand-eye calibration result, that is, the affine relationship between the camera coordinate system and the base coordinate system, together with the relationship between the base coordinate system and the tool coordinate system (which the robot converts internally), the affine relationship between the object coordinate system and the tool coordinate system is obtained, from which pose information the manipulator can recognize is extracted for guidance.
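This chain of transforms can be sketched with homogeneous matrices. The sketch below uses translation-only poses and made-up numbers for brevity (real calibration also carries rotations, and the matrix names are illustrative, not from any robot SDK): the object's pose in the tool frame is T_tool_obj = inv(T_base_tool) * T_base_cam * T_cam_obj.

```python
# Hand-eye transform chain sketch with 4x4 homogeneous matrices
# (translation-only for brevity; real poses include rotation).
def translation(x, y, z):
    return [[1, 0, 0, x], [0, 1, 0, y], [0, 0, 1, z], [0, 0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def inverse_translation(t):
    # Inverse of a pure translation is the negated translation.
    return translation(-t[0][3], -t[1][3], -t[2][3])

T_base_cam = translation(500, 0, 800)    # camera pose in the robot base frame (from calibration)
T_cam_obj = translation(10, 20, 300)     # object pose seen by the camera (from the image)
T_base_tool = translation(400, 50, 200)  # current tool pose reported by the robot

# Object pose in the tool frame: inv(T_base_tool) * T_base_cam * T_cam_obj
T_tool_obj = matmul(inverse_translation(T_base_tool),
                    matmul(T_base_cam, T_cam_obj))
print([row[3] for row in T_tool_obj[:3]])  # -> [110, -30, 900]
```

The printed offset is exactly the guidance command: how far the tool must move in its own frame to reach the object.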

Copyright © 2011 JIN SHI