Parts recognition method based on improved Faster RCNN

Abstract: Aiming at the low detection accuracy of traditional machine vision detection and recognition methods caused by mutual occlusion between assembly parts, varying part poses, external illumination intensity and missed detection of small targets, an improved Faster RCNN (region-based convolutional neural network) parts recognition method was proposed. Firstly, the ResNet101 network, which extracts features better, was used to replace the VGG16 feature extraction network in the original Faster RCNN model. Secondly, in the original region proposal network, two new anchors were added and the aspect ratios of the candidate boxes were reset, giving 15 anchors of different sizes. Then, because the traditional non-maximum suppression (NMS) method misses detections by deleting candidate boxes whose intersection-over-union (IoU) exceeds the threshold, the Soft-NMS method was used in place of traditional NMS to reduce missed detections in dense regions. Finally, a multi-scale training strategy was adopted in the model training stage to lower the missed detection rate and improve model accuracy. Experiments on parts recognition show that the improved Faster RCNN model reaches 96.1% accuracy, 4.6% higher than the original model, and can meet the requirements of part recognition and detection in relatively complex environments with strong illumination and water-stain interference.
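
The algorithmic changes above can be illustrated with a short sketch. The count of 15 anchors is consistent with extending the region proposal network from 3 anchor scales to 5 while keeping 3 aspect ratios (5 × 3 = 15), although the abstract does not state the exact scale and ratio values. The Soft-NMS step is sketched below in Python/NumPy, assuming the Gaussian score-decay variant of the published Soft-NMS algorithm; the parameter names and defaults (sigma, score_thresh) are illustrative assumptions rather than the authors' reported settings.

import numpy as np

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    # Greedy Soft-NMS with Gaussian decay (illustrative sketch, not the paper's code).
    # Hard NMS deletes every candidate box whose IoU with the current top-scoring
    # box exceeds a threshold; Soft-NMS instead decays its score, so heavily
    # overlapped parts in dense regions are less likely to be missed.
    # boxes: (N, 4) array of [x1, y1, x2, y2]; scores: (N,) array of confidences.
    boxes = np.asarray(boxes, dtype=float)
    scores = np.asarray(scores, dtype=float).copy()
    remaining = np.arange(len(scores))
    keep = []
    while remaining.size > 0:
        top = remaining[np.argmax(scores[remaining])]   # highest surviving score
        keep.append(top)
        remaining = remaining[remaining != top]
        if remaining.size == 0:
            break
        # IoU between the selected box and every remaining box
        x1 = np.maximum(boxes[top, 0], boxes[remaining, 0])
        y1 = np.maximum(boxes[top, 1], boxes[remaining, 1])
        x2 = np.minimum(boxes[top, 2], boxes[remaining, 2])
        y2 = np.minimum(boxes[top, 3], boxes[remaining, 3])
        inter = np.maximum(0.0, x2 - x1) * np.maximum(0.0, y2 - y1)
        area_top = (boxes[top, 2] - boxes[top, 0]) * (boxes[top, 3] - boxes[top, 1])
        area_rest = (boxes[remaining, 2] - boxes[remaining, 0]) * \
                    (boxes[remaining, 3] - boxes[remaining, 1])
        iou = inter / (area_top + area_rest - inter)
        scores[remaining] *= np.exp(-(iou ** 2) / sigma)          # Gaussian score decay
        remaining = remaining[scores[remaining] > score_thresh]   # prune near-zero boxes
    return keep  # indices of retained detections

Replacing the hard-NMS call in the detection head with a function like this is the essence of the change: with a small sigma the behaviour approaches hard NMS, while a larger sigma keeps more overlapped detections alive at reduced confidence.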

     
