Semantic segmentation of night vision images for unmanned vehicles based on deep learning

  • Abstract: To improve the scene understanding of night vision images by unmanned vehicles, so that the surrounding environment can be detected and recognized more quickly and accurately at night, deep learning is applied to the semantic segmentation of night-vision scenes, and a semantic segmentation method for unmanned-vehicle night vision images based on a convolution-deconvolution neural network is proposed. The convolution-deconvolution network is built by adding a deconvolution network to a conventional convolutional neural network, so no features need to be selected by hand. Through pixel-to-pixel learning and training, an image semantic segmentation model is obtained; this model directly predicts the scene semantic category of each pixel in a night vision image, providing environment perception for unmanned vehicles driving at night. Experimental results show that the method achieves good accuracy and real-time performance, with a mean IU of 68.47.
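The convolution-deconvolution (encoder-decoder) structure and the pixel-to-pixel training described in the abstract can be illustrated with a minimal sketch. The PyTorch code below is an illustrative assumption only: the layer widths, depth, class count, and input size are made up for the example and are not the architecture or settings reported in the paper.

    # Minimal convolution-deconvolution segmentation sketch (PyTorch).
    # Layer widths, depth, and the number of scene classes are illustrative
    # assumptions, not the configuration used in the paper.
    import torch
    import torch.nn as nn

    class ConvDeconvSegNet(nn.Module):
        def __init__(self, num_classes: int = 11):
            super().__init__()
            # Convolutional (encoder) path: downsample and extract features.
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
                nn.ReLU(inplace=True),
            )
            # Deconvolutional (decoder) path: upsample back to the input
            # resolution and emit one score map per semantic class.
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),
                nn.ReLU(inplace=True),
                nn.ConvTranspose2d(32, num_classes, kernel_size=4, stride=2, padding=1),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.decoder(self.encoder(x))

    # Pixel-to-pixel training: per-pixel cross-entropy against a label map.
    model = ConvDeconvSegNet(num_classes=11)
    images = torch.randn(2, 3, 128, 256)           # dummy night-vision batch
    labels = torch.randint(0, 11, (2, 128, 256))   # dummy per-pixel labels
    logits = model(images)                         # shape (2, 11, 128, 256)
    loss = nn.CrossEntropyLoss()(logits, labels)
    pred = logits.argmax(dim=1)                    # predicted class per pixel

Because every layer is convolutional, the model maps an input image to a same-sized map of class scores in a single forward pass, which is what allows per-pixel prediction without hand-crafted features.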

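The reported figure of 68.47 is a mean IU (mean intersection-over-union) score, i.e. the per-class overlap between predicted and ground-truth pixel masks, averaged over scene classes. The NumPy sketch below shows one common way to compute it; the class indexing and the skipping of classes absent from both prediction and ground truth are assumptions for illustration, not necessarily the exact evaluation protocol of the paper.

    # Mean IU (mean intersection-over-union) over semantic classes.
    import numpy as np

    def mean_iu(pred: np.ndarray, gt: np.ndarray, num_classes: int) -> float:
        """Average per-class IoU between a predicted and a ground-truth label map."""
        ious = []
        for c in range(num_classes):
            pred_c = pred == c
            gt_c = gt == c
            union = np.logical_or(pred_c, gt_c).sum()
            if union == 0:
                continue  # class absent from both maps: skip it
            inter = np.logical_and(pred_c, gt_c).sum()
            ious.append(inter / union)
        return float(np.mean(ious)) if ious else 0.0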
