Perceptually enhanced super-resolution reconstruction model based on deep back projection
Abstract
Super-resolution reconstruction models typified by the super-resolution convolutional neural network (SRCNN) usually achieve high peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) values, but their visual quality is unsatisfactory. Conversely, generative adversarial network (GAN) models typified by the super-resolution generative adversarial network (SRGAN) offer high perceptual quality but tend to produce many false details, which manifests as low PSNR and SSIM values. To address these problems, a perceptually enhanced super-resolution reconstruction model based on deep back projection is proposed. The model extracts features with a dual-scale self-adaptive weighted fusion module, performs sampling through deep back projection, and refines the result with an enhancement module to obtain the final output. Residual and dense connections are adopted throughout, which facilitate feature sharing and effective training of the model. For evaluation, the learning-based learned perceptual image patch similarity (LPIPS) metric is introduced as a new perceptual quality index alongside PSNR and SSIM. Experimental results show that the model achieves average PSNR, SSIM, and LPIPS values of 27.84 dB, 0.7320, and 0.1258, respectively, on the test datasets, outperforming the comparison algorithms on all indicators.
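As a rough illustration of the pipeline described above, the following is a minimal PyTorch sketch of the three stages (dual-scale weighted feature extraction, a back-projection up-sampling unit in the spirit of DBPN, and an enhancement module with a residual connection). All module names, kernel sizes, and channel counts here are assumptions for illustration, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class DualScaleFusion(nn.Module):
    """Two receptive-field branches (3x3 and 5x5) fused by learnable
    softmax weights -- a guess at 'dual-scale self-adaptive weighted
    fusion'; the paper's exact design may differ."""
    def __init__(self, in_ch, feat_ch):
        super().__init__()
        self.branch3 = nn.Conv2d(in_ch, feat_ch, 3, padding=1)
        self.branch5 = nn.Conv2d(in_ch, feat_ch, 5, padding=2)
        self.w = nn.Parameter(torch.ones(2))   # self-adaptive fusion weights
        self.act = nn.PReLU()

    def forward(self, x):
        w = torch.softmax(self.w, dim=0)
        return self.act(w[0] * self.branch3(x) + w[1] * self.branch5(x))

class UpProjection(nn.Module):
    """One deep back-projection up-sampling unit: upsample, project back
    down, and correct with the residual (back-projection) error."""
    def __init__(self, ch):
        super().__init__()
        k, s, p = 6, 2, 2                      # 2x scaling
        self.up1 = nn.ConvTranspose2d(ch, ch, k, s, p)
        self.down = nn.Conv2d(ch, ch, k, s, p)
        self.up2 = nn.ConvTranspose2d(ch, ch, k, s, p)
        self.act = nn.PReLU()

    def forward(self, l):
        h0 = self.act(self.up1(l))             # initial HR features
        l0 = self.act(self.down(h0))           # back-projected LR features
        h1 = self.act(self.up2(l0 - l))        # map the LR error to HR space
        return h0 + h1                         # error-corrected HR features

class PerceptualSR(nn.Module):
    """End-to-end sketch: feature extraction -> back projection ->
    enhancement -> reconstruction, with a residual (skip) connection."""
    def __init__(self, feat_ch=64):
        super().__init__()
        self.extract = DualScaleFusion(3, feat_ch)
        self.up = UpProjection(feat_ch)
        self.enhance = nn.Sequential(          # stand-in enhancement module
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.PReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1))
        self.reconstruct = nn.Conv2d(feat_ch, 3, 3, padding=1)

    def forward(self, x):
        f = self.up(self.extract(x))
        return self.reconstruct(f + self.enhance(f))  # residual connection

sr = PerceptualSR()(torch.rand(1, 3, 48, 48))  # -> shape (1, 3, 96, 96)
```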
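The LPIPS metric mentioned in the evaluation can be computed with the publicly available `lpips` package; unlike PSNR and SSIM, it measures distance in a learned deep feature space, so lower values mean better perceptual similarity. A short usage sketch (the tensors here are dummies):

```python
import torch
import lpips   # pip install lpips

# LPIPS is "lower is better"; the library expects RGB tensors in [-1, 1].
metric = lpips.LPIPS(net='alex')               # AlexNet backbone
sr_img = torch.rand(1, 3, 256, 256) * 2 - 1    # dummy super-resolved image
hr_img = torch.rand(1, 3, 256, 256) * 2 - 1    # dummy ground-truth image
print(metric(sr_img, hr_img).item())           # perceptual distance
```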