Abstract:
To address the problem that infrared and visible images express features differently across scenes, a saliency-based dual-discriminator generative adversarial network is proposed to fuse infrared and visible feature information. Unlike a traditional generative adversarial network, the algorithm employs two discriminators that separately discriminate the saliency regions of the source images and of the fused image, with the saliency regions of the two source images serving as discriminator inputs so that the fused image retains more salient features. A gradient constraint is introduced into the loss function so that salient contrast and rich texture information are preserved in the fused image. Experimental results show that the proposed method outperforms the comparison algorithms on four evaluation metrics: entropy (EN), mean gradient (MG), spatial frequency (SF), and edge intensity (EI). This study achieves efficient fusion of infrared and visible images and is expected to find applications in fields such as target recognition.
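The gradient constraint mentioned above can be illustrated with a minimal sketch. The abstract does not give the exact formulation, so the form below is an assumption: an L1 penalty between the fused image's finite-difference gradients and, at each pixel, the stronger gradient of the two source images (the function names `image_gradients` and `gradient_constraint_loss` are hypothetical).

```python
import numpy as np

def image_gradients(img):
    """Finite-difference gradients along columns (gx) and rows (gy),
    zero-padded at the trailing edge."""
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, :-1] = img[:, 1:] - img[:, :-1]
    gy[:-1, :] = img[1:, :] - img[:-1, :]
    return gx, gy

def gradient_constraint_loss(fused, ir, vis):
    """Hypothetical gradient constraint: L1 distance between the fused
    image's gradients and the element-wise stronger gradient of the
    infrared and visible source images."""
    fx, fy = image_gradients(fused)
    ix, iy = image_gradients(ir)
    vx, vy = image_gradients(vis)
    # at each pixel, keep whichever source gradient has larger magnitude
    tx = np.where(np.abs(ix) > np.abs(vx), ix, vx)
    ty = np.where(np.abs(iy) > np.abs(vy), iy, vy)
    return np.mean(np.abs(fx - tx)) + np.mean(np.abs(fy - ty))
```

Under this assumed form, the loss is zero when the fused image already carries the dominant gradients of both sources, which encourages the generator to transfer texture from whichever source image is locally richer.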