Depth estimation based on progressive adaptive fusion of infrared and visible light images
Graphical Abstract
Abstract
Recovering scene depth from images is a key problem in computer vision. Because images of a single type are limited by varying scene illumination in depth estimation, a depth estimation method based on a progressively fusing convolutional neural network for infrared and visible light images (PF-CNN) is proposed. The method consists of a two-stream filter partially coupled network (TFCNet), an adaptive multi-modal feature fusion network (AMFNet), and an adaptive progressive feature fusion network (APFNet). In the TFCNet, the filters of the infrared and visible light streams are partially coupled to enhance their features. The AMFNet learns residual features of the infrared and visible light images and fuses them adaptively to fully exploit their complementary information. The APFNet learns how to combine fusion features from multiple layers, making full use of the different characteristics of different convolutional layers. Experimental results show that the proposed PF-CNN achieves the best performance on the dataset, increasing threshold accuracy by 5% and significantly outperforming other methods.
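To illustrate the two-stream fusion idea described above, the following is a minimal sketch of a network that extracts features from aligned infrared and visible images and fuses them with learned per-pixel weights before predicting depth. It is an assumption-laden simplification, not the authors' PF-CNN: the class name PFCNNSketch, the layer sizes, and the softmax gating are illustrative choices, and the filter coupling of TFCNet and the progressive multi-layer fusion of APFNet are not modeled here.

```python
import torch
import torch.nn as nn


class PFCNNSketch(nn.Module):
    """Illustrative two-stream fusion sketch (not the authors' implementation)."""

    def __init__(self, channels: int = 32):
        super().__init__()
        # Separate feature extractors for the infrared (1-channel) and
        # visible (3-channel) inputs, standing in for the two streams of TFCNet.
        self.ir_stream = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU())
        self.vis_stream = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU())
        # Adaptive fusion: a 1x1 convolution predicts two per-pixel weights,
        # a rough stand-in for AMFNet's adaptive multi-modal fusion.
        self.fusion_gate = nn.Conv2d(2 * channels, 2, kernel_size=1)
        # Depth head mapping fused features to a single-channel depth map.
        self.depth_head = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1))

    def forward(self, ir: torch.Tensor, vis: torch.Tensor) -> torch.Tensor:
        f_ir = self.ir_stream(ir)
        f_vis = self.vis_stream(vis)
        # Softmax over the two modality weights at every pixel.
        w = torch.softmax(self.fusion_gate(torch.cat([f_ir, f_vis], dim=1)), dim=1)
        fused = w[:, 0:1] * f_ir + w[:, 1:2] * f_vis
        return self.depth_head(fused)


if __name__ == "__main__":
    # Random tensors stand in for an aligned infrared/visible image pair.
    model = PFCNNSketch()
    ir = torch.randn(1, 1, 128, 160)
    vis = torch.randn(1, 3, 128, 160)
    print(model(ir, vis).shape)  # torch.Size([1, 1, 128, 160])
```

The per-pixel gating lets the network lean on the visible stream in well-lit regions and on the infrared stream where illumination is poor, which is the complementary behavior the abstract describes.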