Abstract:
Multi-camera 3D reconstruction can improve accuracy and overcome occlusion by acquiring the 3D positions of targets from multiple viewpoints. To recover the spatial distribution of targets more accurately, a convergent quad-camera 3D reconstruction system is introduced. A reconstruction platform was designed and built with four cameras evenly distributed around the target scene. After calibrating the relative poses of adjacent cameras in the system, the position and pose of each camera in a unified coordinate system were obtained through coordinate-system transformation. The pose of the camera requiring the most transformations was verified by measurement, and the measured results were consistent with those derived from the chained transformation. A chessboard target array of size 66×65 was reconstructed, with a maximum relative error of 0.061% within a range of 45 mm. Compared with the fitted results, the root-mean-square (RMS) error was 0.3193 μm. In reconstruction experiments on a metal block, its shape was recovered from its vertices. Experimental results show that the system can be used for high-precision, occlusion-resistant 3D reconstruction.
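The chained coordinate-system transformation mentioned above can be illustrated with a minimal sketch. The Python snippet below is hypothetical and not taken from the paper: the camera names, the 4×4 homogeneous pose matrices, and the pairwise calibration values are assumed for illustration only. It composes the calibrated relative poses of adjacent cameras so that every camera's pose is expressed in the unified coordinate system of camera 1.

```python
import numpy as np

def pose_matrix(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical pairwise calibration results: T_i_j maps points from camera j's
# frame into camera i's frame (adjacent cameras around the scene).
T_1_2 = pose_matrix(np.eye(3), np.array([0.5, 0.0, 0.0]))   # cam2 -> cam1
T_2_3 = pose_matrix(np.eye(3), np.array([0.0, 0.5, 0.0]))   # cam3 -> cam2
T_3_4 = pose_matrix(np.eye(3), np.array([-0.5, 0.0, 0.0]))  # cam4 -> cam3

# Chain the adjacent-camera transforms so every camera is expressed in the
# unified coordinate system of camera 1.
T_1_1 = np.eye(4)
T_1_3 = T_1_2 @ T_2_3
T_1_4 = T_1_3 @ T_3_4          # camera 4 requires the most transformations

for name, T in [("cam1", T_1_1), ("cam2", T_1_2), ("cam3", T_1_3), ("cam4", T_1_4)]:
    print(name, "position in unified frame:", T[:3, 3])
```

Under this convention, the camera reached through the longest transformation chain (camera 4 here) is the natural candidate for independent verification, since errors in each pairwise calibration accumulate along the chain.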