Semantic Segmentation of Nighttime Infrared Images of Unmanned Vehicles Based on an Improved DeepLabv3+

• Abstract: To enhance an unmanned vehicle's ability to understand night-time scenes, a semantic segmentation algorithm based on an improved DeepLabv3+ network is proposed for infrared images acquired by unmanned vehicles at night. Because objects in autonomous-driving scenes often exhibit very large variations in scale, the algorithm builds on the DeepLabv3+ network by introducing a densely connected atrous spatial pyramid pooling module, so that the multi-scale features generated by the network cover a wider range of scales. In addition, the algorithm concatenates multi-level outputs of the encoder module into the decoder module to recover more of the spatial information and low-level features lost during downsampling. Trained end to end, the network can be applied directly to the semantic segmentation of night-time infrared images. Experimental results show that the segmentation accuracy of the proposed algorithm on an infrared dataset exceeds that of the original DeepLabv3+ algorithm, with a mean intersection over union of 80.42, and that the algorithm offers good real-time performance and accuracy.
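The densely connected atrous spatial pyramid pooling module mentioned in the abstract can be illustrated with a minimal PyTorch sketch in which each atrous branch takes as input the concatenation of the backbone output and all preceding branch outputs, so that the stacked dilation rates cover a denser range of receptive fields than a parallel ASPP. The dilation rates, channel widths, and module names below are illustrative assumptions, not the authors' exact configuration.

```python
# Hypothetical sketch of a densely connected ASPP (DenseASPP-style) module.
import torch
import torch.nn as nn


class DenseASPPBlock(nn.Module):
    """One atrous branch: 1x1 bottleneck followed by a dilated 3x3 convolution."""

    def __init__(self, in_ch, mid_ch, out_ch, dilation):
        super().__init__()
        self.branch = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, kernel_size=1, bias=False),
            nn.BatchNorm2d(mid_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, out_ch, kernel_size=3, padding=dilation,
                      dilation=dilation, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.branch(x)


class DenseASPP(nn.Module):
    """Atrous branches whose inputs are the dense concatenation of all
    preceding branch outputs, widening the range of covered scales."""

    def __init__(self, in_ch=2048, mid_ch=256, growth=64,
                 dilations=(3, 6, 12, 18, 24)):  # assumed values
        super().__init__()
        self.blocks = nn.ModuleList()
        ch = in_ch
        for d in dilations:
            self.blocks.append(DenseASPPBlock(ch, mid_ch, growth, d))
            ch += growth  # each later branch sees all earlier outputs
        self.out_channels = ch

    def forward(self, x):
        feats = [x]
        for block in self.blocks:
            feats.append(block(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)


if __name__ == "__main__":
    # Example: backbone features at output stride 16 for a 512x640 infrared image.
    x = torch.randn(1, 2048, 32, 40)
    print(DenseASPP()(x).shape)  # torch.Size([1, 2368, 32, 40])
```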

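The decoder-side fusion of multi-level encoder features described in the abstract can be sketched in a similarly hypothetical way: several low-level encoder feature maps are projected by 1x1 convolutions, upsampled to a common resolution, and concatenated with the DenseASPP output before the classifier. The strides, channel widths, class count, and number of fused levels below are assumptions for illustration only.

```python
# Hypothetical sketch of a decoder that concatenates multi-level encoder features.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiLevelDecoder(nn.Module):
    def __init__(self, aspp_ch=2368, low_chs=(256, 512), reduce_ch=48,
                 num_classes=4):  # assumed values
        super().__init__()
        # 1x1 projections that shrink each low-level encoder feature map.
        self.reducers = nn.ModuleList(
            nn.Sequential(nn.Conv2d(c, reduce_ch, 1, bias=False),
                          nn.BatchNorm2d(reduce_ch),
                          nn.ReLU(inplace=True))
            for c in low_chs
        )
        fused_ch = aspp_ch + reduce_ch * len(low_chs)
        self.classifier = nn.Sequential(
            nn.Conv2d(fused_ch, 256, 3, padding=1, bias=False),
            nn.BatchNorm2d(256),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, num_classes, 1),
        )

    def forward(self, aspp_out, low_feats, out_size):
        # Upsample the ASPP output and every projected low-level map to the
        # resolution of the shallowest encoder feature, then concatenate.
        target = low_feats[0].shape[-2:]
        feats = [F.interpolate(aspp_out, size=target, mode="bilinear",
                               align_corners=False)]
        for reduce, f in zip(self.reducers, low_feats):
            feats.append(F.interpolate(reduce(f), size=target,
                                       mode="bilinear", align_corners=False))
        logits = self.classifier(torch.cat(feats, dim=1))
        return F.interpolate(logits, size=out_size, mode="bilinear",
                             align_corners=False)


if __name__ == "__main__":
    dec = MultiLevelDecoder()
    aspp = torch.randn(1, 2368, 32, 40)           # DenseASPP output, stride 16
    lows = [torch.randn(1, 256, 128, 160),        # encoder feature, stride 4
            torch.randn(1, 512, 64, 80)]          # encoder feature, stride 8
    print(dec(aspp, lows, out_size=(512, 640)).shape)  # torch.Size([1, 4, 512, 640])
```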
