Classification of combustion state of sintering flame based on CNN-Transformer dual-stream network

    Abstract: Sintering flame images contain fine-grained local flame-state features alongside complex, highly variable global flame-state features. Traditional convolutional neural networks, however, are more sensitive to local features and struggle to extract global flame-state information, which limits the expressive power of sintering-flame representations and results in low classification accuracy for sintering flame states. To address this problem, a dual-stream feature-fusion classification method based on CNN-Transformer is proposed. The method comprises two modules, a CNN (convolutional neural network) stream and a Transformer stream, designed in parallel: the CNN stream extracts local feature information from the RGB sintering-flame image, while the Transformer stream extracts global feature information from the grayscale sintering-flame image. The local and global features extracted by the two streams are then combined through cascade interactive feature fusion, and a softmax classifier finally performs the sintering flame state classification. Experimental results show that the flame classification accuracy reaches 96.20%, an improvement of 6%–8% over traditional convolutional neural networks.
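The architecture described above (a parallel CNN stream on the RGB image, a Transformer stream on the grayscale image, cascade fusion by concatenation, and a softmax head) can be sketched in PyTorch. This is a minimal illustrative sketch, not the authors' implementation: all layer sizes, the patch size, the fusion details, and the number of flame classes are assumptions.

```python
# Illustrative sketch of a CNN-Transformer dual-stream classifier.
# Layer widths, patch size, and num_classes are assumed, not from the paper.
import torch
import torch.nn as nn


class DualStreamFlameNet(nn.Module):
    def __init__(self, num_classes: int = 4, img_size: int = 64,
                 patch: int = 8, dim: int = 64):
        super().__init__()
        # CNN stream: local features from the RGB flame image.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> (B, 32)
        )
        # Transformer stream: global features from the grayscale flame image.
        n_patches = (img_size // patch) ** 2
        self.patch_embed = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)
        self.pos = nn.Parameter(torch.zeros(1, n_patches, dim))
        enc = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(enc, num_layers=2)
        # Fusion head: concatenate (cascade) local and global features.
        self.head = nn.Linear(32 + dim, num_classes)

    def forward(self, rgb: torch.Tensor, gray: torch.Tensor) -> torch.Tensor:
        local_feat = self.cnn(rgb)                              # (B, 32)
        tokens = self.patch_embed(gray)                         # (B, dim, H/p, W/p)
        tokens = tokens.flatten(2).transpose(1, 2) + self.pos   # (B, N, dim)
        global_feat = self.transformer(tokens).mean(dim=1)      # (B, dim)
        fused = torch.cat([local_feat, global_feat], dim=1)     # cascade fusion
        return self.head(fused).softmax(dim=1)                  # class probs


# Usage: one RGB and one grayscale view of the same flame image.
net = DualStreamFlameNet()
probs = net(torch.randn(2, 3, 64, 64), torch.randn(2, 1, 64, 64))
```

The mean over Transformer tokens stands in for whatever global pooling the paper uses; the interactive aspect of the fusion (e.g. cross-attention between streams) is simplified here to plain concatenation.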

     
