YANG Feifan, LI Xiaoguang, ZHUO Li. Image deblurring of dynamic scene based on attention residual CODEC network[J]. Journal of Applied Optics, 2021, 42(4): 685-690. DOI: 10.5768/JAO202142.0402008


Image deblurring of dynamic scene based on attention residual CODEC network


Abstract: Image deblurring for dynamic scenes is a challenging computer vision problem. Blurry images not only degrade subjective perception but also impair the performance of subsequent intelligent analysis. An image deblurring method for dynamic scenes based on an attention residual CODEC (encoder-decoder) network was proposed. First, in the encoding stage, multiple residual modules were used to extract features, and a spatial attention module was added to perceive the spatial location of blur. Then, a global-local residual connection strategy was adopted in the network to fuse multi-layer convolutional features and reduce information loss. Finally, a restored image with clear edge structure was generated in the decoding stage. Experimental results show that the proposed algorithm achieves a peak signal-to-noise ratio (PSNR) of 31.76 dB and a structural similarity index measure (SSIM) of 0.912 on a public dataset. Both objective and subjective quality evaluations indicate that the proposed method can effectively recover clear images containing abundant edge and contour information, achieving the best performance among the compared algorithms.
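The abstract outlines an encoder-decoder pipeline: residual blocks extract features, a spatial attention module locates spatially varying blur, global-local residual connections fuse multi-layer features, and the decoder reconstructs the sharp image. The PyTorch sketch below illustrates that pipeline only at a high level, as a minimal reading of the abstract; the layer counts, channel width (channels=64), the attention design (average/max pooling followed by a 7x7 convolution and sigmoid), and the class names are assumptions for illustration, not the authors' published architecture.

```python
# Minimal sketch of an attention residual encoder-decoder for deblurring.
# Illustrative reconstruction from the abstract only; module sizes, layer
# counts, and the exact attention design are assumptions, not the authors'
# published architecture.
import torch
import torch.nn as nn


class SpatialAttention(nn.Module):
    """Per-pixel mask so the network can focus on spatially varying blur
    (assumed design: channel-wise avg+max pooling -> conv -> sigmoid)."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg_map = x.mean(dim=1, keepdim=True)        # channel-wise average
        max_map = x.max(dim=1, keepdim=True).values  # channel-wise maximum
        mask = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * mask                              # reweight features spatially


class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with a local residual (skip) connection."""
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)                      # local residual connection


class AttentionResidualEncoderDecoder(nn.Module):
    def __init__(self, channels=64, num_blocks=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1),
            *[ResidualBlock(channels) for _ in range(num_blocks)],
        )
        self.attention = SpatialAttention()
        self.decoder = nn.Sequential(
            *[ResidualBlock(channels) for _ in range(num_blocks)],
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, blurry):
        feats = self.encoder(blurry)     # residual blocks extract features
        feats = self.attention(feats)    # spatial attention locates blur
        residual = self.decoder(feats)   # decoder predicts a sharp correction
        return blurry + residual         # global residual connection


if __name__ == "__main__":
    model = AttentionResidualEncoderDecoder()
    restored = model(torch.randn(1, 3, 128, 128))
    print(restored.shape)                # torch.Size([1, 3, 128, 128])
```

In this sketch the final `blurry + residual` addition plays the role of the global residual connection, while each `ResidualBlock` supplies a local one, mirroring the global-local strategy described in the abstract.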

     
