Citation: Huang An-qi, Hou Zhi-qiang, Yu Wang-sheng, Liu Xiang. Visual tracking algorithm based on improved Bhattacharyya coefficient and model update strategy[J]. Journal of Applied Optics, 2015, 36(1): 52-57. DOI: 10.5768/JAO201536.0102001


Visual tracking algorithm based on improved Bhattacharyya coefficient and model update strategy

Abstract: The traditional mean shift tracking algorithm uses the Bhattacharyya coefficient to measure the statistical-feature similarity between the target model and the candidate model. However, because of the influence of background features, the optimal location obtained by Bhattacharyya-coefficient matching is not necessarily the true target location, which can bias the target localization during tracking. This paper proposes an improved Bhattacharyya similarity coefficient that incorporates a foreground/background confidence value. The new coefficient effectively suppresses the influence of background features in the candidate region and increases the weight of target features, clearly improving target-matching accuracy compared with the original coefficient. Based on the improved coefficient, the similarity coefficients of the target model and the background model are analyzed jointly to infer the cause of a matching disturbance, and the corresponding model update strategy is then applied. Five tracking algorithms were tested on four challenging video sequences. Quantitative analysis shows that the proposed algorithm needs 75.76 ms on average to process one frame, a real-time performance second only to the original mean shift tracker, while achieving the lowest tracking error among the five algorithms. The experimental results show that the algorithm effectively suppresses background disturbance and avoids model drift, and it remains robust in a variety of complex scenes.
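For readers unfamiliar with the similarity measure, the standard Bhattacharyya coefficient between two normalized histograms p and q is ρ(p, q) = Σ_u √(p_u q_u). The abstract does not give the exact form of the improved coefficient, so the sketch below is only a minimal illustration of the general idea, assuming the foreground/background confidence enters as a per-bin weight on the Bhattacharyya terms; the function names and the confidence definition are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def bhattacharyya(p, q, eps=1e-12):
    """Standard Bhattacharyya coefficient: rho = sum_u sqrt(p_u * q_u).

    p and q are histograms (e.g. color histograms of the target model and
    the candidate region); rho is 1 for identical normalized distributions.
    """
    p = p / max(p.sum(), eps)
    q = q / max(q.sum(), eps)
    return float(np.sum(np.sqrt(p * q)))

def foreground_confidence(target_hist, background_hist, eps=1e-12):
    """Illustrative per-bin foreground/background confidence (an assumption,
    not the paper's definition): bins that occur mainly in the target get a
    value near 1, bins dominated by the surrounding background near 0."""
    t = target_hist / max(target_hist.sum(), eps)
    b = background_hist / max(background_hist.sum(), eps)
    return t / (t + b + eps)

def weighted_bhattacharyya(p, q, w, eps=1e-12):
    """Confidence-weighted variant: background-dominated bins contribute
    little to the similarity score. With w = 1 for every bin this reduces
    to the standard coefficient."""
    p = p / max(p.sum(), eps)
    q = q / max(q.sum(), eps)
    return float(np.sum(w * np.sqrt(p * q)))

# Toy usage with 16-bin gray-level histograms.
rng = np.random.default_rng(0)
target, background, candidate = rng.random(16), rng.random(16), rng.random(16)
w = foreground_confidence(target, background)
print(bhattacharyya(target, candidate))
print(weighted_bhattacharyya(target, candidate, w))
```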

