Abstract:
Environmental perception is a key task for driverless vehicles operating at night. An improved YOLOv3 network is proposed to detect pedestrians and vehicles in infrared images captured by a driverless vehicle at night. The problem of estimating the moving direction of surrounding vehicles is transformed into the problem of estimating the angle of the surrounding vehicle's position, and the network is further fused with depth estimation information to estimate the distance and speed of surrounding vehicles, so that the driverless vehicle can infer the driving intention of surrounding vehicles at night. The network is end-to-end: an image is taken as the input, and the bounding-box positions, classes, and angle estimates of the detected targets are returned directly at the output layer; the depth estimation information is then combined with these outputs to obtain the distance and speed of surrounding vehicles. Experimental results show that target detection in infrared images captured by the driverless vehicle takes 0.04 s per frame, angle and speed prediction perform well, and both accuracy and real-time performance meet the requirements of practical application.
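To illustrate how detection outputs and depth estimates could be combined into distance and speed, the minimal sketch below assumes a per-pixel depth map, a bounding box in pixel coordinates, and the stated 0.04 s frame interval; taking the median depth inside the box as the distance and differencing it across consecutive frames as the relative speed. The function names, box format, and synthetic values are hypothetical and do not reproduce the paper's actual fusion method.

```python
import numpy as np

def estimate_distance(depth_map: np.ndarray, box: tuple) -> float:
    """Distance to a detected vehicle, taken as the median depth (in metres)
    inside its bounding box. `box` is (x1, y1, x2, y2) in pixel coordinates."""
    x1, y1, x2, y2 = box
    region = depth_map[y1:y2, x1:x2]
    return float(np.median(region))

def estimate_speed(dist_prev: float, dist_curr: float, frame_interval: float) -> float:
    """Relative (radial) speed in m/s from the change in distance between
    two consecutive frames separated by `frame_interval` seconds."""
    return (dist_curr - dist_prev) / frame_interval

if __name__ == "__main__":
    # Synthetic depth maps stand in for the network's depth-estimation output.
    h, w = 480, 640
    depth_t0 = np.full((h, w), 30.0)   # vehicle roughly 30 m away in frame t
    depth_t1 = np.full((h, w), 29.2)   # vehicle has closed to ~29.2 m in frame t+1
    box = (300, 200, 380, 260)         # detected bounding box (x1, y1, x2, y2)

    d0 = estimate_distance(depth_t0, box)
    d1 = estimate_distance(depth_t1, box)
    v = estimate_speed(d0, d1, frame_interval=0.04)  # 0.04 s/frame, as reported
    print(f"distance: {d1:.1f} m, relative speed: {v:.1f} m/s")
```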