Monocular visual positioning system based on feature distortion using an embedded GPU

Abstract: Visual positioning and navigation have broad application prospects in logistics, warehousing, and related fields, but traditional monocular vision struggles to achieve accurate positioning. Although binocular vision can deliver accurate positioning and navigation, its hardware cost is high and it increases the vehicle size. Therefore, a monocular positioning technique based on feature deformation was proposed. A single camera recorded the distortion of features (coded ring patterns) laid on the ground, and an embedded graphics processing unit (GPU) analyzed the distortion, achieving end-to-end monocular visual positioning. The embedded GPU recognized the coded pattern of a feature ring in the camera images with a deep-learning object detection algorithm, and the deformation of the imaged pattern was extracted with traditional image processing. The deformation information was fed into a regression model trained with the extreme gradient boosting (XGBoost) algorithm to predict the coordinates of the camera relative to the center of the pattern; combined with the absolute coordinates of the feature ring, the camera's absolute indoor coordinates were then calculated. Experimental results show that the average positioning error over a 2 m × 2 m area is only 0.55 cm, one order of magnitude better than values reported in the literature, and the positioning solution runs at 20 frames per second on a desktop computer and 4 frames per second on the embedded GPU, meeting real-time requirements.
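The final step of the pipeline, mapping a ring's measured deformation to the camera's absolute indoor coordinates, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the deformation feature names, the toy training data, and the linear least-squares model (standing in for the trained XGBoost regressor) are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training set (assumption, for illustration only): camera
# offsets (dx, dy) in metres relative to the ring centre, paired with
# toy "deformation features" a real pipeline would measure from the
# imaged ring (e.g. ellipse axes, orientation).
offsets = rng.uniform(-1.0, 1.0, size=(200, 2))
features = np.column_stack([
    offsets[:, 0],                           # stand-in for horizontal skew
    offsets[:, 1],                           # stand-in for vertical skew
    offsets[:, 0] * offsets[:, 1],           # cross term
    np.hypot(offsets[:, 0], offsets[:, 1]),  # eccentricity proxy
])
features += rng.normal(scale=1e-3, size=features.shape)  # measurement noise

# A linear least-squares model stands in here for the trained XGBoost
# regressor described in the abstract.
X = np.column_stack([features, np.ones(len(features))])  # add bias column
W, *_ = np.linalg.lstsq(X, offsets, rcond=None)          # (5, 2) weights

def predict_camera_position(feat, ring_center):
    """Map one ring's deformation features to the camera's absolute
    indoor coordinates: predicted offset from the regression model,
    plus the ring's known absolute position."""
    rel = np.append(feat, 1.0) @ W   # camera offset from ring centre
    return ring_center + rel         # absolute indoor coordinates

# A decoded ring at absolute position (3.0, 4.0) m, with the camera at
# a true offset of (0.2, -0.3) m from its centre.
true_off = np.array([0.2, -0.3])
feat = np.array([true_off[0], true_off[1],
                 true_off[0] * true_off[1], np.hypot(*true_off)])
pos = predict_camera_position(feat, np.array([3.0, 4.0]))
print(np.round(pos, 2))  # approximately [3.2, 3.7]
```

The key design point shown is the two-stage coordinate solution: the regressor only predicts the camera's position relative to the observed ring, and the globally known ring position converts that into an absolute indoor coordinate.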

     

