Semantic SLAM based on visual SLAM and object detection
Graphical Abstract
Abstract
Semantic information is essential for mobile robots to understand the content of their environment and perform complex tasks. To address the problem that the point clouds constructed by ORB-SLAM2 are too sparse and lack semantic information, a dense semantic point cloud map of the environment was constructed by combining an object detection algorithm with visual SLAM. First, the object detection network YOLO v3, together with object regularization, was used to obtain accurate 2D object labels. At the same time, the ORB-SLAM2 algorithm was used to construct a sparse point cloud map of the environment. The color images with 2D labels, the corresponding depth images, and the keyframes were used to generate dense point cloud labels carrying semantic information. A graph-based segmentation algorithm was then used to segment the dense point cloud, and the point cloud labels were fused with the segmented point cloud to construct a dense semantic point cloud map of the environment. The proposed method was tested on the TUM public dataset, and the experimental results show that it constructs a better semantic map. Compared with the traditional ORB-SLAM2 algorithm, the system reduces the camera's absolute pose error and absolute trajectory error during map construction by 16.02% and 15.86%, respectively, which improves mapping accuracy. Finally, to reduce the storage space of the point cloud maps and to facilitate navigation and obstacle avoidance by mobile robots, the constructed semantic maps are converted into octree maps.
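The core fusion step described above, generating labeled 3D points from a 2D detection and the corresponding depth image, can be sketched as a standard pinhole-camera back-projection. This is an illustrative sketch, not the paper's implementation: the function name `backproject_labeled_points` and the mask/intrinsics inputs are assumptions; in practice the mask would come from the YOLO v3 detection (after object regularization) and the pose from an ORB-SLAM2 keyframe.

```python
import numpy as np

def backproject_labeled_points(depth, label_mask, K):
    """Back-project the pixels inside one 2D detection mask into
    camera-frame 3D points (a hypothetical helper for illustration).

    depth      : (H, W) depth image in metres
    label_mask : (H, W) boolean mask for one detected object
    K          : 3x3 camera intrinsic matrix
    Returns an (N, 3) array of 3D points; all of them would share
    the detection's semantic label in the dense semantic map.
    """
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    vs, us = np.nonzero(label_mask)   # pixel rows (v) and columns (u)
    z = depth[vs, us]
    valid = z > 0                      # drop pixels with no depth reading
    us, vs, z = us[valid], vs[valid], z[valid]
    x = (us - cx) * z / fx             # pinhole model: X = (u - cx) * Z / fx
    y = (vs - cy) * z / fy             # pinhole model: Y = (v - cy) * Z / fy
    return np.stack([x, y, z], axis=1)
```

In a full pipeline, the returned camera-frame points would be transformed by the keyframe pose into the world frame and merged into the dense point cloud, after which the graph-based segmentation refines the per-point labels.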