Remote sensing image fusion based on deep learning and the non-subsampled shearlet transform
Abstract
Remote-sensing image fusion refers to selectively and strategically combining image information with different observation characteristics, acquired by different sensors, into a new image with better observation characteristics. A remote-sensing image fusion algorithm combining deep learning with the non-subsampled shearlet transform (NSST) is proposed. In this algorithm, the spatial resolution of the multispectral (MS) image is first enhanced by an improved super-resolution reconstruction network. The panchromatic (PAN) image is then histogram-matched to each component of the reconstructed MS image, and each corresponding channel pair is decomposed by the NSST into a low-frequency sub-band and several high-frequency directional sub-bands. The low-frequency fusion coefficients are obtained with an adaptive weighted-average rule based on the regional gradient, while the high-frequency fusion coefficients are obtained with a local spatial-frequency maximum rule; the fused image is finally reconstructed by the inverse NSST. The MS images City and Inland from different datasets were upsampled by bicubic interpolation. With the proposed algorithm, the universal image quality index (UIQI) was 0.9886 and 0.9321, respectively, and the spectral angle mapper (SAM) was 1.8721 and 2.1432, respectively. Experimental results show that the fused images produced by the proposed algorithm have clearer structure and more completely preserved spectral information, that the fusion quality is better than that of the comparison algorithms, and that the fused images are more suitable for human visual observation.
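The sketch below is a rough, non-authoritative illustration of the two fusion rules summarized in the abstract, applied to generic low- and high-frequency sub-bands; the NSST decomposition/reconstruction and the super-resolution network are assumed to be provided elsewhere, and the window size and function names are illustrative assumptions rather than the paper's exact formulation.

```python
# Minimal sketch of the two fusion rules (assumptions: window size, epsilon,
# and function names are illustrative; the NSST sub-bands are given as inputs).
import numpy as np
from scipy.ndimage import uniform_filter


def local_gradient_energy(band, size=3):
    """Regional gradient energy: local mean of squared finite differences."""
    gy, gx = np.gradient(band.astype(np.float64))
    return uniform_filter(gx ** 2 + gy ** 2, size=size)


def fuse_lowpass(lp_ms, lp_pan, size=3, eps=1e-12):
    """Adaptive weighted average of low-frequency sub-bands, with weights
    derived from each source's regional gradient energy."""
    g_ms = local_gradient_energy(lp_ms, size)
    g_pan = local_gradient_energy(lp_pan, size)
    w_ms = g_ms / (g_ms + g_pan + eps)
    return w_ms * lp_ms + (1.0 - w_ms) * lp_pan


def local_spatial_frequency(band, size=3):
    """Local spatial frequency: sqrt(row frequency^2 + column frequency^2)
    computed over a sliding window."""
    band = band.astype(np.float64)
    rf = np.zeros_like(band)
    cf = np.zeros_like(band)
    rf[:, 1:] = (band[:, 1:] - band[:, :-1]) ** 2  # horizontal differences
    cf[1:, :] = (band[1:, :] - band[:-1, :]) ** 2  # vertical differences
    return np.sqrt(uniform_filter(rf, size=size) + uniform_filter(cf, size=size))


def fuse_highpass(hp_ms, hp_pan, size=3):
    """Choose, pixel-wise, the high-frequency coefficient whose source has
    the larger local spatial frequency (choose-max rule)."""
    mask = local_spatial_frequency(hp_ms, size) >= local_spatial_frequency(hp_pan, size)
    return np.where(mask, hp_ms, hp_pan)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a, b = rng.standard_normal((2, 64, 64))
    print(fuse_lowpass(a, b).shape, fuse_highpass(a, b).shape)
```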